objective_functions package

Submodules

objective_functions.cca module

Implements losses for CCA.

class objective_functions.cca.CCALoss(outdim_size, use_all_singular_values, device)

Bases: Module

Implements the loss for CCA.

__init__(outdim_size, use_all_singular_values, device)

Initialize CCALoss Object.

Parameters:
  • outdim_size (int) – Output dimension, i.e. the number of top singular values used when use_all_singular_values is False

  • use_all_singular_values (bool) – Whether to include all singular values in the loss.

  • device (torch.device) – What device to place this module on. Must agree with model.

forward(H1, H2)

Apply the CCALoss as described in the paper to inputs H1 and H2.

Parameters:
  • H1 (torch.Tensor) – Tensor corresponding to the first random variable in CCA.

  • H2 (torch.Tensor) – Tensor corresponding to the second random variable in CCA.

Returns:

CCALoss for this pair.

Return type:

torch.Tensor

training: bool
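
A minimal usage sketch (the import path and the (batch, features) input layout are assumptions based on the signatures above; DCCA implementations differ on input orientation):

    import torch
    from objective_functions.cca import CCALoss  # assumed import path

    device = torch.device("cpu")
    loss_fn = CCALoss(outdim_size=10, use_all_singular_values=False, device=device)

    H1 = torch.randn(32, 50)  # view 1: batch of 32 samples, 50 features (assumed layout)
    H2 = torch.randn(32, 50)  # view 2, same shape
    loss = loss_fn(H1, H2)    # CCA loss for this pair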

objective_functions.contrast module

Implements objectives for contrastive loss.

class objective_functions.contrast.AliasMethod(probs)

Bases: object

Implements a generic method (the alias method) for sampling from arbitrary discrete probability distributions.

Sourced from https://hips.seas.harvard.edu/blog/2013/03/03/the-alias-method-efficient-sampling-with-many-discrete-outcomes/. Alternatively, see http://cgi.cs.mcgill.ca/~enewel3/posts/alias-method/index.html for more details.

__init__(probs)

Initialize AliasMethod object.

Parameters:

probs (list[float]) – List of probability weights for each outcome. Weights may sum to more than 1; they are normalized internally.

cuda()

Generate CUDA version of self, for GPU-based sampling.

draw(N)

Draw N samples from the multinomial distribution defined by the given probability array.

Parameters:

N (int) – Number of samples to draw.

Returns:

Tensor of N sampled indices.
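
A quick hedged example (passing the weights as a tensor is an assumption; the constructor docs above only specify a list of weights):

    import torch
    from objective_functions.contrast import AliasMethod  # assumed import path

    sampler = AliasMethod(torch.tensor([1.0, 2.0, 3.0]))  # unnormalized weights
    samples = sampler.draw(5)  # tensor of 5 indices drawn in proportion 1:2:3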

class objective_functions.contrast.MultiSimilarityLoss

Bases: Module

Implements MultiSimilarityLoss.

__init__()

Initialize MultiSimilarityLoss Module.

forward(feats, labels)

Apply MultiSimilarityLoss to Tensor Inputs.

Parameters:
  • feats (torch.Tensor) – Features

  • labels (torch.Tensor) – Labels

Returns:

Loss output.

Return type:

torch.Tensor

training: bool
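
A hedged usage sketch (feature normalization is an assumption; the docs above only say "Features" and "Labels"):

    import torch
    import torch.nn.functional as F
    from objective_functions.contrast import MultiSimilarityLoss  # assumed import path

    msl = MultiSimilarityLoss()
    feats = F.normalize(torch.randn(32, 128), dim=1)  # 32 embeddings (normalization assumed)
    labels = torch.randint(0, 10, (32,))              # one class label per embedding
    loss = msl(feats, labels)
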
class objective_functions.contrast.NCEAverage(inputSize, outputSize, K, T=0.07, momentum=0.5, use_softmax=False)

Bases: Module

Implements NCEAverage Loss Function.

__init__(inputSize, outputSize, K, T=0.07, momentum=0.5, use_softmax=False)

Instantiate NCEAverage Loss Function.

Parameters:
  • inputSize (int) – Input feature dimension

  • outputSize (int) – Output size (typically the number of data samples in the memory bank)

  • K (int) – Number of negative (noise) samples. See the paper for details.

  • T (float, optional) – Temperature value. See the paper for details. Defaults to 0.07.

  • momentum (float, optional) – Momentum for NCEAverage Loss. Defaults to 0.5.

  • use_softmax (bool, optional) – Whether to use softmax or not. Defaults to False.

forward(l, ab, y, idx=None)

Apply NCEAverage Module.

Parameters:
  • l (torch.Tensor) – Labels

  • ab (torch.Tensor) – See paper for more.

  • y (torch.Tensor) – True values.

  • idx (torch.Tensor, optional) – See paper for more. Defaults to None.

Returns:

Outputs of the NCEAverage computation (see the paper for details).

Return type:

torch.Tensor

training: bool
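
A hedged instantiation sketch (shapes follow common memory-bank NCE implementations and are assumptions, not guarantees of this module's contract):

    import torch
    from objective_functions.contrast import NCEAverage  # assumed import path

    n_data = 50000  # dataset size (assumed meaning of outputSize)
    nce = NCEAverage(inputSize=128, outputSize=n_data, K=4096, T=0.07)
    l = torch.randn(32, 128)             # first-view features
    ab = torch.randn(32, 128)            # second-view features
    y = torch.randint(0, n_data, (32,))  # sample indices into the memory bank
    out = nce(l, ab, y)
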
class objective_functions.contrast.NCECriterion(n_data)

Bases: Module

Implements NCECriterion Loss.

Implements Eq. (12) from the paper: the NCE loss L_{NCE}.

__init__(n_data)

Instantiate NCECriterion Loss.

Parameters:

n_data (int) – Total number of data samples, used to define the noise distribution.

forward(x)

Apply NCECriterion to Tensor Input.

Parameters:

x (torch.Tensor) – Tensor Input

Returns:

Loss

Return type:

torch.Tensor

training: bool
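
A short hedged example (the [batch, K+1] score layout, with the positive score first, is an assumption borrowed from standard NCE implementations):

    import torch
    from objective_functions.contrast import NCECriterion  # assumed import path

    criterion = NCECriterion(n_data=50000)
    x = torch.rand(32, 4097)  # scores for 1 positive + K=4096 negatives (assumed layout)
    loss = criterion(x)
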
class objective_functions.contrast.NCESoftmaxLoss

Bases: Module

Implements the softmax cross-entropy loss (a.k.a. the InfoNCE loss from the CPC paper).

__init__()

Instantiate NCESoftmaxLoss Module.

forward(x)

Apply NCESoftmaxLoss to Layer Input.

Parameters:

x (torch.Tensor) – Layer Input

Returns:

Layer Output

Return type:

torch.Tensor

training: bool
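
A short hedged example (the positive score sitting at index 0 is an assumption; the docs above only specify a tensor input):

    import torch
    from objective_functions.contrast import NCESoftmaxLoss  # assumed import path

    criterion = NCESoftmaxLoss()
    x = torch.randn(32, 4097)  # one positive plus K=4096 negative scores per sample (assumed)
    loss = criterion(x)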

objective_functions.objectives_for_supervised_learning module

Implements various objectives for supervised learning.

objective_functions.objectives_for_supervised_learning.CCA_objective(out_dim, cca_weight=0.001, criterion=CrossEntropyLoss())

Define loss function for CCA.

Parameters:
  • out_dim – output dimension

  • cca_weight – weight of cca loss

  • criterion – criterion for supervised loss
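
A hedged construction sketch (the exact call signature of the returned objective is not specified above and depends on the training loop):

    import torch
    from objective_functions.objectives_for_supervised_learning import CCA_objective  # assumed path

    # composite objective: supervised criterion plus 0.001 * CCA loss
    objective = CCA_objective(out_dim=10, cca_weight=0.001,
                              criterion=torch.nn.CrossEntropyLoss())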

objective_functions.objectives_for_supervised_learning.MFM_objective(ce_weight, modal_loss_funcs, recon_weights, input_to_float=True, criterion=CrossEntropyLoss())

Define objective for MFM.

Parameters:
  • ce_weight – the weight of simple supervised loss

  • modal_loss_funcs – list of functions, one per modality, each taking that modality's reconstruction and input and computing its reconstruction loss

  • recon_weights – list of float values indicating the weight of the reconstruction loss for each modality

  • input_to_float – whether to convert the input to float or not

  • criterion – the loss function for supervised loss (default CrossEntropyLoss)
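
Conceptually, the objective combines a weighted supervised loss with per-modality weighted reconstruction losses. A self-contained sketch of that composite (illustrative only; not the library's exact returned signature):

    import torch

    def mfm_style_loss(pred, truth, recons, inputs,
                       ce_weight, modal_loss_funcs, recon_weights, criterion):
        # supervised term, scaled by ce_weight
        total = ce_weight * criterion(pred, truth)
        # add each modality's weighted reconstruction loss
        for recon, inp, f, w in zip(recons, inputs, modal_loss_funcs, recon_weights):
            total = total + w * f(recon, inp)
        return total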

objective_functions.objectives_for_supervised_learning.MVAE_objective(ce_weight, modal_loss_funcs, recon_weights, input_to_float=True, annealing=1.0, criterion=CrossEntropyLoss())

Define objective for MVAE.

Parameters:
  • ce_weight – the weight of simple supervised loss

  • modal_loss_funcs – list of functions, one per modality, each taking that modality's reconstruction and input and computing its reconstruction loss

  • recon_weights – list of float values indicating the weight of the reconstruction loss for each modality

  • input_to_float – boolean deciding if we should convert input to float or not.

  • annealing – the annealing factor, i.e. the weight of the KL term.

  • criterion – the loss function for supervised loss (default CrossEntropyLoss)

objective_functions.objectives_for_supervised_learning.RMFE_object(reg_weight=1e-10, criterion=BCEWithLogitsLoss(), is_packed=False)

Define loss function for RMFE.

Parameters:
  • model – model used for inference

  • reg_weight – weight of regularization term

  • criterion – criterion for supervised loss

  • is_packed – whether the input is packed for an LSTM or not

objective_functions.objectives_for_supervised_learning.RefNet_objective(ref_weight, criterion=CrossEntropyLoss(), input_to_float=True)

Define loss function for RefNet.

Parameters:
  • ref_weight – weight of refiner loss

  • criterion – criterion for supervised loss

  • input_to_float – whether to convert input to float or not

objective_functions.recon module

Implements various reconstruction losses for MIMIC MVAE.

objective_functions.recon.elbo_loss(modal_loss_funcs, weights, annealing=1.0)

Create wrapper function that computes the model ELBO (Evidence Lower Bound) loss.

Parameters:
  • modal_loss_funcs – list of per-modality reconstruction loss functions

  • weights – list of weights for the reconstruction loss of each modality

  • annealing – the annealing factor, i.e. the weight of the KL term
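
For reference, a minimal sketch of the annealed ELBO shape being wrapped (illustrative; variable names are assumptions):

    import torch

    def elbo_style_loss(recons, inputs, mu, logvar,
                        modal_loss_funcs, weights, annealing=1.0):
        # weighted sum of per-modality reconstruction losses
        recon = sum(w * f(r, x) for f, w, r, x in
                    zip(modal_loss_funcs, weights, recons, inputs))
        # KL(q(z|x) || N(0, I)) for a Gaussian posterior, scaled by the annealing factor
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + annealing * kl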

objective_functions.recon.nosigmloss1d(a, b)

Get 1D sigmoid loss, WITHOUT applying the sigmoid function to the inputs beforehand.

Parameters:
  • a (torch.Tensor) – Predicted output

  • b (torch.Tensor) – True output

Returns:

Loss

Return type:

torch.Tensor

objective_functions.recon.recon_weighted_sum(modal_loss_funcs, weights)

Create wrapper function that computes the weighted model reconstruction loss.

Parameters:
  • modal_loss_funcs – list of per-modality reconstruction loss functions

  • weights – list of weights for the reconstruction loss of each modality

objective_functions.recon.sigmloss1d(a, b)

Get 1D sigmoid loss, applying the sigmoid function to the inputs beforehand.

Parameters:
  • a (torch.Tensor) – Predicted output

  • b (torch.Tensor) – True output

Returns:

Loss

Return type:

torch.Tensor
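
A small hedged example (whether the returned loss is per-sample or reduced to a scalar is not specified above):

    import torch
    from objective_functions.recon import sigmloss1d  # assumed import path

    a = torch.randn(8, 100)  # predicted output (pre-sigmoid logits)
    b = torch.rand(8, 100)   # true output in [0, 1]
    loss = sigmloss1d(a, b)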

objective_functions.recon.sigmloss1dcentercrop(adim, bdim)

Get 1D sigmoid loss, cropping the inputs so that they match in size.

Parameters:
  • adim (int) – Predicted output size

  • bdim (int) – True output size; assumed to be larger than the predicted size.

Returns:

Loss function taking the predicted output a and the true output b, respectively.

Return type:

fn
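
A hedged usage sketch (the spatial shapes are assumptions; per the parameter docs above, the true output is larger and gets center-cropped to match):

    import torch
    from objective_functions.recon import sigmloss1dcentercrop  # assumed import path

    loss_fn = sigmloss1dcentercrop(adim=100, bdim=112)
    a = torch.randn(8, 1, 100, 100)  # predicted output (assumed shape)
    b = torch.rand(8, 1, 112, 112)   # true output, larger than predicted
    loss = loss_fn(a, b)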

objective_functions.regularization module

Implements the paper: “Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies” NeurIPS 2020.

class objective_functions.regularization.Perturbation

Bases: object

Utility class for tensor perturbation techniques.

classmethod get_expanded_logits(logits: Tensor, n_samples: int, logits_flg: bool = True) Tensor

Perform softmax and then expand the logits n_samples (num_eval_samples) times.

Parameters:
  • logits_flg (bool) – whether the input is logits or softmax outputs

  • logits (torch.Tensor) – tensor holding the logits output from the model

  • n_samples (int) – number of times to duplicate

Returns:

The expanded logits tensor.

classmethod perturb_tensor(tens: Tensor, n_samples: int, perturbation: bool = True) Tensor

Flatten the tensor, expand it, perturb it, and reconstruct the original shape.

Note, this function assumes that the batch is the first dimension.

Parameters:
  • tens – Tensor to manipulate.

  • n_samples – times to perturb

  • perturbation – if False, only duplicate the tensor without perturbing it

Returns:

Tensor of shape [batch, samples * num_eval_samples].
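
A brief hedged example of the classmethods above (shapes follow the docs; the module path is assumed):

    import torch
    from objective_functions.regularization import Perturbation  # assumed import path

    x = torch.randn(16, 3, 32, 32)                         # batch-first tensor
    x_pert = Perturbation.perturb_tensor(x, n_samples=10)  # duplicated and perturbed
    logits = torch.randn(16, 5)
    expanded = Perturbation.get_expanded_logits(logits, n_samples=10)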

class objective_functions.regularization.RegParameters(lambda_: float = 1e-10, norm: float = 2.0, estimation: str = 'ent', optim_method: str = 'max_ent', n_samples: int = 10, grad: bool = True)

Bases: object

This class controls all the regularization-related properties.

__init__(lambda_: float = 1e-10, norm: float = 2.0, estimation: str = 'ent', optim_method: str = 'max_ent', n_samples: int = 10, grad: bool = True)

Initialize RegParameters Object.

Parameters:
  • lambda_ (float, optional) – Lambda value (regularization strength). Defaults to 1e-10.

  • norm (float, optional) – Norm value. Defaults to 2.0.

  • estimation (str, optional) – Regularization estimation. Defaults to ‘ent’.

  • optim_method (str, optional) – Optimization method. Defaults to ‘max_ent’.

  • n_samples (int, optional) – Number of samples. Defaults to 10.

  • grad (bool, optional) – Whether to regularize gradient or not. Defaults to True.

class objective_functions.regularization.Regularization

Bases: object

Class in charge of the regularization techniques.

classmethod get_batch_norm(grad: Tensor, loss: Optional[Tensor] = None, estimation: str = 'ent') Tensor

Calculate the expectation of the batch gradient.

Parameters:
  • grad (torch.Tensor) – tensor holding the gradient batch

  • loss (torch.Tensor, optional) – batch loss values, if required by the estimation method

  • estimation (str) – estimation method (e.g. “ent”)

Returns:

Approximation of the required expectation.

classmethod get_batch_statistics(loss: Tensor, n_samples: int, estimation: str = 'ent') Tensor

Calculate the expectation over the batch.

Parameters:
  • loss (torch.Tensor) – batch loss values

  • n_samples (int) – number of samples

  • estimation (str) – estimation method (e.g. “ent”)

Returns:

Influence expectation.

classmethod get_regularization_term(inf_scores: Tensor, norm: float = 2.0, optim_method: str = 'max_ent') Tensor

Compute the regularization term given a batch of information scores.

Parameters:
  • inf_scores (torch.Tensor) – tensor holding a batch of information scores

  • norm (float) – defines which norm to use (1 or 2)

  • optim_method (str) – optimization method (one of “min_ent”, “max_ent”, “max_ent_minus”, “normalized”)

Returns:

The regularization term.
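
A short hedged example of the documented classmethod (the import path is assumed):

    import torch
    from objective_functions.regularization import Regularization  # assumed import path

    inf_scores = torch.rand(32)  # batch of information scores
    reg = Regularization.get_regularization_term(inf_scores, norm=2.0,
                                                 optim_method='max_ent')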

class objective_functions.regularization.RegularizationLoss(loss: Module, model: Module, delta: float = 1e-10, is_pack: bool = True)

Bases: Module

Define the regularization loss.

__init__(loss: Module, model: Module, delta: float = 1e-10, is_pack: bool = True) None

Initialize RegularizationLoss Object.

Parameters:
  • loss (torch.nn.Module) – Loss function used to compare the model output with the desired output.

  • model (torch.nn.Module) – Model to apply regularization loss to.

  • delta (float, optional) – Strength of regularization loss. Defaults to 1e-10.

  • is_pack (bool, optional) – Whether samples are packed (for LSTMs) or not. Defaults to True.

forward(logits, inputs)

Apply RegularizationLoss to input.

Parameters:
  • logits (torch.Tensor) – Desired outputs of the model.

  • inputs (torch.Tensor) – Model Input.

Returns:

Regularization Loss for this sample.

Return type:

torch.Tensor

training: bool
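
A hedged end-to-end sketch (my_model is a hypothetical stand-in classifier; is_pack=False since the inputs here are plain tensors):

    import torch
    from objective_functions.regularization import RegularizationLoss  # assumed import path

    my_model = torch.nn.Linear(20, 5)  # hypothetical stand-in model
    reg_loss = RegularizationLoss(loss=torch.nn.CrossEntropyLoss(),
                                  model=my_model, delta=1e-10, is_pack=False)
    inputs = torch.randn(16, 20)
    logits = my_model(inputs)
    total = reg_loss(logits, inputs)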

Module contents