I am training a multivariate normal distribution from TensorFlow Probability (TFP) as the output layer of a probabilistic model, as shown below:
import tensorflow_probability as tfp
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam
tfpl = tfp.layers

# Negative log-likelihood of the targets under the predicted distribution
def NLL(y_true, y_pred):
    return -y_pred.log_prob(y_true)

output_dim = 250
model = Sequential([
    Input(shape=(5,)),
    Dense(tfpl.MultivariateNormalTriL.params_size(output_dim)),
    tfpl.MultivariateNormalTriL(output_dim)
])
model.compile(loss=NLL, optimizer=Adam(lr=1e-3, clipvalue=0.25))
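For reference, the fit step itself is just a standard Keras call; the random arrays below are only placeholders to make the snippet self-contained (the real inputs are 5-dimensional and the targets 250-dimensional, matching the model above):

import numpy as np

# Placeholder data with the shapes the model expects;
# epochs and batch_size are arbitrary here.
x_train = np.random.randn(1000, 5).astype("float32")
y_train = np.random.randn(1000, 250).astype("float32")

model.fit(x_train, y_train, epochs=10, batch_size=64)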
The training works with Adam and clipvalue=0.25. However, I still sometimes get very high loss values while optimizing the model. Are there any useful tricks to improve numerical stability?
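To make it concrete, here is the kind of "trick" I have in mind, as a rough, untested sketch: replacing the MultivariateNormalTriL layer with a DistributionLambda that uses a larger diag_shift in tfb.FillScaleTriL, so the diagonal of the Cholesky factor stays further away from zero (the 1e-3 value is just a guess). I am not sure whether this is the right direction:

import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors
tfpl = tfp.layers

d = 250  # same as output_dim above

# Sketch only: split the Dense outputs into a mean and an unconstrained
# lower-triangular part, then build the Cholesky factor with a larger
# diagonal shift so it cannot collapse towards zero.
mvn_layer = tfpl.DistributionLambda(
    lambda t: tfd.MultivariateNormalTriL(
        loc=t[..., :d],
        scale_tril=tfb.FillScaleTriL(diag_shift=1e-3).forward(t[..., d:])
    )
)

This layer would take the place of tfpl.MultivariateNormalTriL(output_dim) in the Sequential model above, with the Dense layer size unchanged at tfpl.MultivariateNormalTriL.params_size(d). Is something like this worth trying, or are there better-established tricks?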
Any help is highly appreciated. Thanks
question from: https://stackoverflow.com/questions/65904164/tensorflow-probability-negative-log-likelihood