Description
Thanks for adding the generative functionality! Is there a bug, or am I doing it wrong?
See the command and output below (test dataset after encoding and training as per the README):
```
➜ transformer-lm git:(master) gpt-2-gen tests/shakespeare-test-run "Artificial intelligence"
loading model from tests/shakespeare-test-run
generating text for prefix Artificial intelligence
Traceback (most recent call last):
  File "/Users/.../anaconda3/bin/gpt-2-gen", line 11, in
    load_entry_point('lm', 'console_scripts', 'gpt-2-gen')()
  File "/Users/.../transformer-lm/lm/inference.py", line 120, in fire_gen_main
    fire.Fire(only_allow_defined_args(gen_main))
  File "/Users/.../anaconda3/lib/python3.7/site-packages/fire/core.py", line 127, in Fire
    component_trace = _Fire(component, args, context, name)
  File "/Users/.../anaconda3/lib/python3.7/site-packages/fire/core.py", line 366, in _Fire
    component, remaining_args)
  File "/Users/.../anaconda3/lib/python3.7/site-packages/fire/core.py", line 542, in _CallCallable
    result = fn(*varargs, **kwargs)
  File "/Users/.../transformer-lm/lm/fire_utils.py", line 30, in _return_wrapped
    return function_to_decorate(*args, **kwargs)
  File "/Users/.../transformer-lm/lm/inference.py", line 116, in gen_main
    tokens_gen = mw.generate_tokens(tokens, tokens_to_generate, top_k)
  File "/Users/.../transformer-lm/lm/inference.py", line 86, in generate_tokens
    ntk = self.get_next_top_k(tokens, top_k)
  File "/Users/.../transformer-lm/lm/inference.py", line 74, in get_next_top_k
    next_log_probs = self.get_log_probs(tokens)[-1]
  File "/Users/.../transformer-lm/lm/inference.py", line 51, in get_log_probs
    assert len(tokens) <= self.model.hparams.n_ctx  # TODO
AssertionError
```
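For context, the failing assertion says the number of prompt tokens plus tokens already generated must not exceed the model's context window (`n_ctx`). A minimal sketch of one possible workaround, assuming the fix is simply to truncate to the most recent `n_ctx` tokens before each forward pass (the helper `clamp_context` is hypothetical, not part of the repo):

```python
def clamp_context(tokens, n_ctx):
    """Keep only the last n_ctx tokens so that
    len(tokens) <= n_ctx holds before the model is called."""
    return tokens[-n_ctx:]

# Illustrative values: a long token-id sequence and a small
# context size such as a test run might use.
tokens = list(range(1500))
n_ctx = 1024
clamped = clamp_context(tokens, n_ctx)
assert len(clamped) <= n_ctx
print(len(clamped))  # 1024
```

If the test model in `tests/shakespeare-test-run` was trained with a very small `n_ctx`, even a short prefix could trip the assertion, so checking the saved hparams would be a good first step.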