That your model can be dead wrong due to small differences in initial conditions (🦋 effect) doesn't tell you that small differences in initial conditions are insurmountable; it just tells you that your model might fail to capture how things in the real world self-regulate.
In fact, such a model assumes that the stability of what you're studying would be expressed in the same vocabulary as your variables. Nerds mistakenly call this "meta-stability"; there's nothing "meta" about it, you nincompoop: it's just that what persists isn't captured by your language!
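The contrast above can be sketched numerically (a hypothetical illustration, not from the original text): the logistic map in its chaotic regime amplifies a microscopic difference in initial conditions into a macroscopic one, while a simple self-regulating (mean-reverting) system erases the same difference. The functions and parameters here are my own toy choices.

```python
# Toy sketch: a chaotic model vs. a self-regulating one, each run from
# two initial conditions that differ by only 1e-10.

def logistic(x, r=4.0):
    # Logistic map in its chaotic regime (r = 4): tiny differences
    # between trajectories get stretched at every step.
    return r * x * (1 - x)

def regulated(x, target=0.5, k=0.5):
    # Mean-reverting dynamics: the state is pulled back toward `target`,
    # so a tiny difference in initial conditions shrinks at every step.
    return x + k * (target - x)

def max_gap(step, a, b, n=50):
    # Largest separation between the two trajectories over n steps.
    gap = abs(a - b)
    for _ in range(n):
        a, b = step(a), step(b)
        gap = max(gap, abs(a - b))
    return gap

def final_gap(step, a, b, n=50):
    # Separation after n steps.
    for _ in range(n):
        a, b = step(a), step(b)
    return abs(a - b)

# Chaotic model: the 1e-10 difference blows up to macroscopic size.
chaotic = max_gap(logistic, 0.2, 0.2 + 1e-10)

# Self-regulating system: the same difference is forgotten entirely.
stable = final_gap(regulated, 0.2, 0.2 + 1e-10)
```

Both systems see the same perturbation; only the one without a restoring feedback turns it into a forecasting disaster. The point of the paragraph is that the real world often looks like `regulated` even when your model looks like `logistic`.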