This repository was archived by the owner on Mar 20, 2026. It is now read-only.

Commit 1f5b414

cndn authored and facebook-github-bot committed
Support Latent Variable Model in base training (#879)
Summary:
Pull Request resolved: #879
Pull Request resolved: pytorch/translate#598

Details in https://fb.workplace.com/notes/ning-dong/closing-research-to-production-gap-a-story-of-latent-variable-model-migration/443418839813586/

Reviewed By: xianxl

Differential Revision: D15742439

fbshipit-source-id: 168c84bd30a5da3c2fb404fcca74266deef1f964
1 parent e46b924 commit 1f5b414

1 file changed: 2 additions & 1 deletion

fairseq/modules/learned_positional_embedding.py
@@ -36,7 +36,8 @@ def forward(self, input, incremental_state=None, positions=None):
         if positions is None:
             if incremental_state is not None:
                 # positions is the same for every token when decoding a single step
-                positions = input.data.new(1, 1).fill_(self.padding_idx + input.size(1))
+                # Without the int() cast, it doesn't work in some cases when exporting to ONNX
+                positions = input.data.new(1, 1).fill_(int(self.padding_idx + input.size(1)))
             else:
                 positions = utils.make_positions(
                     input.data, self.padding_idx, onnx_trace=self.onnx_trace,
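For context, a minimal sketch of what the patched single-step branch computes, outside the module. The `padding_idx = 1` value and the toy input shape are assumptions for illustration (fairseq commonly uses 1 as the padding index); the `int()` cast mirrors the commit's fix, which per the commit message avoids failures in some ONNX export cases where `fill_` receives a non-plain-number value.

```python
import torch

# Assumed values for illustration only.
padding_idx = 1                               # typical fairseq padding index
input = torch.zeros(2, 5, dtype=torch.long)   # batch of 2, 5 tokens decoded so far

# During incremental (single-step) decoding, every sequence in the batch is
# at the same position, so a 1x1 tensor holding padding_idx + current length
# is enough; it broadcasts against the batch.
positions = input.data.new(1, 1).fill_(int(padding_idx + input.size(1)))
print(positions)  # tensor([[6]])
```

The cast to a plain Python `int` before `fill_` is the whole fix; the computed value and shape are unchanged.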

0 commit comments