Context
From the PR #75 review: the demo's test runner has an architectural issue with data consistency.
Problem
PtcDemo.TestRunner.get_last_result/0 (lines 226-230) regenerates fresh random data by calling SampleData functions.
However, the Agent already has datasets loaded into state at init time (agent.ex:70-75) and executes programs against those datasets (agent.ex:110).
As a result, test validation runs against different data than the program was actually executed against, which can produce false positives or false negatives in test results.
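A minimal sketch of the divergence described above (the module bodies are illustrative, not the actual demo source; only the names cited in this issue are real):

```elixir
# Hypothetical sketch of the reported pattern, not the demo's real code.
defmodule SampleDataSketch do
  # Each call produces a fresh random dataset.
  def orders, do: for _ <- 1..3, do: %{total: :rand.uniform(100)}
end

defmodule TestRunnerSketch do
  # Bug (as reported): validation regenerates data here instead of
  # reading the dataset the Agent actually executed against.
  def get_last_result do
    SampleDataSketch.orders()
  end
end

# Two calls almost certainly observe different data,
# so any assertion comparing them is unreliable.
IO.inspect(TestRunnerSketch.get_last_result())
IO.inspect(TestRunnerSketch.get_last_result())
```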
Suggested Fix
- Add a last_result field to the Agent state struct
- Store the execution result alongside last_program (agent.ex:146)
- Change get_last_result/0 to fetch the stored result from Agent state instead of regenerating data
- Alternatively, expose a public API such as Agent.last_result/0 to retrieve the stored result
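The fix above could look roughly like this; a sketch built on Elixir's stdlib Agent, where the field and function names follow the bullet points but the surrounding structure (struct shape, run/2 signature) is assumed, not taken from the demo:

```elixir
defmodule PtcDemo.AgentSketch do
  # Hypothetical state struct: last_result is the proposed new field.
  defstruct datasets: nil, last_program: nil, last_result: nil

  def start_link(datasets) do
    Agent.start_link(fn -> %__MODULE__{datasets: datasets} end, name: __MODULE__)
  end

  # Execute against the datasets already held in state, and store the
  # result alongside last_program at execution time.
  def run(program, execute_fun) do
    Agent.update(__MODULE__, fn state ->
      result = execute_fun.(program, state.datasets)
      %{state | last_program: program, last_result: result}
    end)
  end

  # Public API: return the stored result instead of regenerating data.
  def last_result do
    Agent.get(__MODULE__, & &1.last_result)
  end
end
```

With something like this in place, get_last_result/0 in the test runner would simply delegate to the stored value, so validation sees exactly the data the program ran against.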
Impact
- Low severity: the tests use constraint-based assertions, which are somewhat resilient
- Affects demo reliability and test accuracy
- Can cause confusing test failures when the regenerated random data diverges from the executed data
Files Affected
- demo/lib/ptc_demo/agent.ex (state management)
- demo/lib/ptc_demo/test_runner.ex (get_last_result/0)