runtime-optimization-of-validated-gpt-free-proof-of-concept #133

@david-thrower

Description

Kind of issue: feature-request-or-enhancement. Final fine-tuning of the merge candidate: 233e882

Additional context

The last commit looks like our cold-start performance is at parity with GPT-2's pre-trained performance. The run took 2.5 hours. Goals here:

  1. Fine-tune the model search to a constrained range of at- or near-optimal values.
  2. Reduce the number of sub-trials and epochs.
  3. Optionally, see whether we can get away with increasing the sequence length further, and whether it is worth it in terms of embedding performance.
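Goals 1 and 2 amount to narrowing the search space and capping the trial budget. A minimal sketch of what that could look like, assuming a simple random-search loop; the hyperparameter names, ranges, and budget values below are illustrative assumptions, not values taken from the issue or the codebase:

```python
import random

# Hypothetical constrained search space: narrow each hyperparameter to a
# band around the values that produced the GPT-2-parity run (goal 1).
# These names and bounds are assumptions for illustration only.
SEARCH_SPACE = {
    "learning_rate": (1e-4, 3e-4),    # continuous band
    "embedding_dim": (192, 256),      # inclusive integer range
    "sequence_length": (512, 1024),   # goal 3: probe longer sequences
}

MAX_SUB_TRIALS = 5    # goal 2: fewer sub-trials than the 2.5 h run used
EPOCHS_PER_TRIAL = 3  # goal 2: fewer epochs per trial

def sample_trial(rng: random.Random) -> dict:
    """Draw one configuration from the constrained space."""
    return {
        "learning_rate": rng.uniform(*SEARCH_SPACE["learning_rate"]),
        "embedding_dim": rng.randint(*SEARCH_SPACE["embedding_dim"]),
        "sequence_length": rng.randint(*SEARCH_SPACE["sequence_length"]),
        "epochs": EPOCHS_PER_TRIAL,
    }

def run_search(seed: int = 0) -> list[dict]:
    """Enumerate the capped trial budget; each trial would then be
    trained and scored by the real model-search harness."""
    rng = random.Random(seed)
    return [sample_trial(rng) for _ in range(MAX_SUB_TRIALS)]

if __name__ == "__main__":
    for i, trial in enumerate(run_search()):
        print(i, trial)
```

In practice the constrained bounds would come from the trial logs of the 233e882 run, and the sequence-length band would be widened only if embedding quality holds up (goal 3).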

Suggested Labels (If you don't know, that's ok): kind/performance kind/hpc kind/scientific
