### Proposal

As accidentally observed on [main and without any feature flag](https://github.com/prometheus/prometheus/issues/14808), we use a lot of memory for the `CreatedTimestamp` call for OM text. This is due to deep copying of the [parser on every `CreatedTimestamp` call, so effectively once per series (!)](https://github.com/prometheus/prometheus/blob/main/model/textparse/openmetricsparse.go#L271). This was a known naive implementation we decided to accept while iterating on this feature in https://github.com/prometheus/prometheus/pull/14356.

This issue is about minimizing that memory use. We already discussed this, but to put it here:

* We should see if we can reuse the existing parser (not copy it).
* If we have to copy, copy once and reuse a second parser (e.g. reset its positions); see the rough sketch at the end of this issue.

## Acceptance Criteria

* The additional memory use per OM text scrape with the CT feature is minimal (within ~+5% max).

All of this is a good lesson and an argument to improve the OM text format in 2.0 (:

Related to https://github.com/prometheus/prometheus/issues/14217
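
To make the second bullet concrete, here is a minimal Go sketch of the "copy once, reset positions" idea. The `parser`, `ctPeek`, and `createdTimestampPeek` names are made up for illustration and are not the real `OpenMetricsParser` internals; the point is only that the scrape body is shared and just the offsets get re-synced per `CreatedTimestamp` call, instead of deep copying the whole parser for every series.

```go
package main

import "fmt"

// parser is a hypothetical stand-in for an OM text parser: it keeps offsets
// into a shared input buffer while scanning series.
type parser struct {
	input []byte // shared, read-only scrape body
	start int    // current token start offset
	end   int    // current token end offset

	// ctPeek is allocated lazily on the first CreatedTimestamp lookup and
	// then reused, instead of deep-copying the parser on every call.
	ctPeek *parser
}

// createdTimestampPeek returns a parser positioned like p that can be
// advanced (to look for a *_created line) without disturbing p. The copy is
// made once; subsequent calls only reset its positions.
func (p *parser) createdTimestampPeek() *parser {
	if p.ctPeek == nil {
		cp := *p // shallow copy; the input buffer is shared, not duplicated
		p.ctPeek = &cp
		return p.ctPeek
	}
	// Reuse the cached copy: only re-sync the positions.
	p.ctPeek.input = p.input
	p.ctPeek.start = p.start
	p.ctPeek.end = p.end
	return p.ctPeek
}

func main() {
	p := &parser{input: []byte("# TYPE foo counter\nfoo_total 1\nfoo_created 123\n")}
	peek := p.createdTimestampPeek()
	peek.start = 31                  // pretend we scanned ahead to the _created line
	fmt.Println(p.start, peek.start) // p is untouched: 0 31
}
```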