I noticed this in the current report:

| | Training (with int8) |
| -- | -- |
| Baize-7B | 26GB |
| Baize-13B | 25GB |
| Baize-30B | 42GB |

The 13B model actually consumes less memory than the 7B one. Is that a typo?