
lacss.modules

lacss.modules.Lacss

Bases: Module

Main class for the LACSS model

Attributes:

Name Type Description
backbone ConvNeXt

The ConvNeXt backbone

lpn LPN

The LPN head for detecting cell locations

detector Detector

A weightless module that interprets the lpn output

segmentor Segmentor

The segmentation head

__call__(image, gt_locations=None, *, training=None)

Parameters:

Name Type Description Default
image ArrayLike

[H, W, C]

required
gt_locations ArrayLike | None

[M, 2] if training, otherwise None

None

Returns: a dict of model outputs
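Because JIT-compiled models need fixed-shape inputs, a variable-length location list is typically padded to a fixed [M, 2] shape. A minimal sketch, using the -1 padding convention mentioned in the Detector section; `pad_locations` is a hypothetical helper, not part of the LACSS API:

```python
def pad_locations(locations, max_n, pad_value=-1.0):
    """Pad a variable-length list of (y, x) locations to a fixed
    [max_n, 2] shape, filling unused rows with pad_value.
    Illustrative helper only, not part of the LACSS API."""
    locs = list(locations)[:max_n]
    return locs + [(pad_value, pad_value)] * (max_n - len(locs))

# One ground-truth location, padded to a fixed size of 3.
print(pad_locations([(1.0, 2.0)], 3))
# [(1.0, 2.0), (-1.0, -1.0), (-1.0, -1.0)]
```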

from_config(config) classmethod

Factory method to build an Lacss instance from a configuration dictionary

Parameters:

Name Type Description Default
config dict

A configuration dictionary.

required

Returns:

Type Description

An Lacss instance.

get_config()

Convert to a configuration dict that can be serialized with JSON

Returns:

Name Type Description
config dict

a configuration dict
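The intended round trip between get_config and from_config can be sketched with the standard json module. The config keys below are illustrative assumptions, not the real LACSS schema:

```python
import json

# Hypothetical configuration dict of the kind get_config() might return;
# the keys shown are assumptions for illustration, not the real schema.
config = {
    "backbone": {"patch_size": 4, "depths": [3, 3, 9, 3], "dims": [96, 192, 384, 768]},
    "detector": {"test_max_output": 512},
}

# get_config() -> json.dumps -> json.loads -> from_config()
serialized = json.dumps(config)
restored = json.loads(serialized)
assert restored == config
```

Note that JSON converts tuples to lists, so a config built this way should use lists (or normalize on load).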

lacss.modules.ConvNeXt

Bases: Module

ConvNeXt CNN backbone

Attributes:

Name Type Description
patch_size int

Stem patch size

depths Sequence[int]

Number of blocks at each stage.

dims Sequence[int]

Feature dimension at each stage.

drop_path_rate float

Stochastic depth rate.

layer_scale_init_value float

Init value for Layer Scale.

out_channels int

FPN output channels. Setting this to -1 disables the FPN, in which case the model outputs only the encoder features.

__call__(x, *, training=None)

Parameters:

Name Type Description Default
x ArrayLike

Image input.

required
training Optional[bool]

Whether to run the network in training mode (i.e., with stochastic depth)

None

Returns:

Type Description
tuple[DataDict, DataDict]

A tuple of (encoder_outputs, decoder_outputs). Both are dictionaries mapping feature scale (e.g. "2") to features. If out_channels is -1, decoder_outputs is None.
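Assuming a scale key "k" denotes a spatial stride of 2**k (a common FPN convention, stated here as an assumption about LACSS internals), the feature-map shape at each scale can be computed as:

```python
def feature_shape(image_hw, scale_key):
    """Spatial shape of the feature map at a given scale key,
    assuming scale "k" means a stride of 2**k (an assumption,
    not confirmed LACSS behavior)."""
    stride = 2 ** int(scale_key)
    h, w = image_hw
    return (h // stride, w // stride)

# A 512x512 input at scale "2" would give a 128x128 feature map.
print(feature_shape((512, 512), "2"))  # (128, 128)
```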

get_imagenet_weights(model_type)

Get ImageNet weights

Parameters:

Name Type Description Default
model_type str

The expected model specification. This must match the current instance attributes.

  • tiny: dims=(96, 192, 384, 768), depths=(3,3,9,3)
  • small: dims=(96, 192, 384, 768), depths=(3,3,27,3)
  • base: dims=(128, 256, 512, 1024), depths=(3,3,27,3)
  • large: dims=(192, 384, 768, 1536), depths=(3,3,27,3)
  • X-large: dims=(256, 512, 1024, 2048), depths=(3,3,27,3)
required

Returns:

Type Description
Params

A frozen dict representing weights of the current module
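The specification table above can be expressed as a lookup dict. The dims/depths values come from the table; `check_model_type` is an illustrative helper, not part of the LACSS API:

```python
# Model specifications from the table above, as a lookup dict.
CONVNEXT_SPECS = {
    "tiny":    {"dims": (96, 192, 384, 768),    "depths": (3, 3, 9, 3)},
    "small":   {"dims": (96, 192, 384, 768),    "depths": (3, 3, 27, 3)},
    "base":    {"dims": (128, 256, 512, 1024),  "depths": (3, 3, 27, 3)},
    "large":   {"dims": (192, 384, 768, 1536),  "depths": (3, 3, 27, 3)},
    "X-large": {"dims": (256, 512, 1024, 2048), "depths": (3, 3, 27, 3)},
}

def check_model_type(model_type, dims, depths):
    """Return True if the given instance attributes match the named
    specification. Hypothetical helper for illustration only."""
    spec = CONVNEXT_SPECS[model_type]
    return spec["dims"] == tuple(dims) and spec["depths"] == tuple(depths)
```

Since get_imagenet_weights requires the model_type to match the current instance attributes, a check like this is what one would run before requesting weights.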

lacss.modules.LPN

Bases: Module

Location detection head

Attributes:

feature_levels: Input feature level, e.g. [2, 3, 4]
conv_spec: Conv layer specification
detection_roi: Parameter for label smoothing

__call__(inputs, scaled_gt_locations=None)

Parameters:

Name Type Description Default
inputs Mapping[str, ArrayLike]

feature dict: {'lvl': [H, W, C]}

required
scaled_gt_locations Optional[ArrayLike]

[N, 2] locations scaled to 0..1; only valid during training

None

Returns:

Type Description
dict

A dict of features

  • lpn_scores: dict: {'lvl': [H, W, 1]}
  • lpn_regressions: dict {'lvl': [H, W, 2]}
  • gt_lpn_scores: dict {'lvl': [H, W, 1]}, only if training
  • gt_lpn_regressions: dict {'lvl': [H, W, 2]}, only if training
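Converting pixel-coordinate locations into the 0..1-scaled form expected by scaled_gt_locations can be sketched as follows; `scale_locations` is a hypothetical helper and assumes (y, x) ordering:

```python
def scale_locations(locations, image_hw):
    """Scale pixel-coordinate (y, x) locations to the 0..1 range.
    Illustrative helper, not part of the LACSS API; assumes
    (y, x) ordering of the location coordinates."""
    h, w = image_hw
    return [(y / h, x / w) for y, x in locations]

# Center-left point of a 256x256 image.
print(scale_locations([(128.0, 64.0)], (256, 256)))  # [(0.5, 0.25)]
```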

lacss.modules.Detector

Bases: Module

A weightless module that converts LPN output to a list of locations. Non-max-suppression is applied to remove redundant detections.

Attributes:

train_nms_threshold: Threshold for non-max-suppression during training
train_pre_nms_topk: If > 0, only the top-k scored locations will be analyzed during training
train_max_output: Maximum number of outputs during training
train_min_score: Minimum score to be considered during training
max_proposal_offset: During training, if a detected location is within this distance
    of a ground-truth location, the ground-truth location is replaced with the detected
    one for segmentation purposes.
test_nms_threshold: Threshold for non-max-suppression during testing
test_pre_nms_topk: If > 0, only the top-k scored locations will be analyzed during testing
test_max_output: Maximum number of outputs during testing
test_min_score: Minimum score to be considered during testing
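The max_proposal_offset behavior can be sketched as follows, assuming Euclidean distance; `replace_with_detections` is illustrative, not the actual implementation:

```python
import math

def replace_with_detections(gt_locations, detections, max_offset):
    """For each ground-truth (y, x) location, substitute the nearest
    detection if it lies within max_offset; otherwise keep the ground
    truth. A sketch of max_proposal_offset, assuming Euclidean distance."""
    out = []
    for gy, gx in gt_locations:
        best, best_d = None, max_offset
        for dy, dx in detections:
            d = math.hypot(gy - dy, gx - dx)
            if d <= best_d:
                best, best_d = (dy, dx), d
        out.append(best if best is not None else (gy, gx))
    return out
```

A nearby detection replaces the ground truth, while a far-away one leaves it untouched.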

__call__(scores, regressions, gt_locations=None, *, training=None)

Parameters:

Name Type Description Default
scores Mapping[str, ArrayLike]

{'scale': [H, W, 1]} output from LPN

required
regressions Mapping[str, ArrayLike]

{'scale': [H, W, 2]} output from LPN

required
gt_locations Optional[ArrayLike]

Ground-truth cell locations. This can be an array of [N, 2] (training) or None (inference). May be padded with -1

None
training Optional[bool]

Whether to run the module in training mode or not

None

Returns:

Type Description
DataDict

a dictionary of values.

  • pred_locations: Sorted array based on scores
  • pred_scores: Sorted array, padded with -1
  • training_locations: This value exists during training only.
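A minimal sketch of greedy non-max suppression over point detections, assuming suppression is by Euclidean distance between locations (the actual Detector's nms_threshold semantics may differ):

```python
import math

def point_nms(locations, scores, threshold):
    """Greedy NMS for point detections: keep the highest-scoring
    location, drop any remaining location closer than `threshold`
    to a kept one, and repeat. A sketch, not the LACSS implementation."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        y, x = locations[i]
        if all(math.hypot(y - ky, x - kx) >= threshold for ky, kx in kept):
            kept.append((y, x))
    return kept
```

The output is sorted by score, matching the pred_locations / pred_scores description above.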

lacss.modules.Segmentor

Bases: Module

LACSS segmentation head.

Attributes:

Name Type Description
feature_level int

The scale of the feature used for segmentation

conv_spec tuple[Sequence[int], Sequence[int]]

conv_block definition, e.g. ((384,384,384), (64,))

instance_crop_size int

Crop size for segmentation.

with_attention int

Whether to use a spatial attention layer.

learned_encoding bool

Whether to use a learned position encoding (instead of a hard-coded one).

encoder_dims Sequence[int]

Dims of the position encoder, if using learned encoding. Default is (8, 8, 4)

__call__(features, locations)

Parameters:

Name Type Description Default
features Mapping[str, ArrayLike]

{'scale': [H, W, C]} feature dictionary from the backbone.

required
locations ArrayLike

[N, 2] normalized to image size.

required

Returns:

Type Description
DataDict

A dictionary of values representing segmentation outputs.

  • instance_output: [N, crop_size, crop_size]
  • instance_mask: [N, 1, 1] boolean mask indicating valid outputs
  • instance_yc: [N, crop_size, crop_size] meshgrid y coordinates
  • instance_xc: [N, crop_size, crop_size] meshgrid x coordinates
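The instance_yc / instance_xc outputs can be illustrated with a pure-Python meshgrid around a crop center; `crop_meshgrid` is a hypothetical helper, not how the Segmentor actually computes them:

```python
def crop_meshgrid(center, crop_size):
    """Build [crop_size, crop_size] grids of absolute y and x
    coordinates for a crop centered on an instance location.
    Illustrative only, not the LACSS implementation."""
    cy, cx = center
    y0 = cy - crop_size // 2
    x0 = cx - crop_size // 2
    yc = [[y0 + i for _ in range(crop_size)] for i in range(crop_size)]
    xc = [[x0 + j for j in range(crop_size)] for _ in range(crop_size)]
    return yc, xc

# A 2x2 crop centered at (4, 4): rows carry y, columns carry x.
print(crop_meshgrid((4, 4), 2))
# ([[3, 3], [4, 4]], [[3, 4], [3, 4]])
```

Grids like these let the segmentation output be mapped back from crop coordinates to absolute image coordinates.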