Relating units can kick off a lengthy sequence of hooks, and the services in the juju environment may not be usable until they have completed running. However, as far as you can tell from the output of juju status, the system is up and running as soon as the initial relation-joined hooks have run, even though the services can still be in flux. Even in the simplest case, a relation-changed hook may be running despite 'juju status' telling us everything is live.
To automate the use of juju, for example in test suites or deployment wrappers, we need to be able to tell when the system has reached a stable state. I would define this stable state as 'no hooks are running and none are pending'.
Stable state could be exposed at the unit level, in which case automation tools would aggregate this for all related units in the environment and know when it is safe to proceed and actually make use of the deployed system. Alternatively, juju could do this aggregation and in addition expose stable state at the service level (all units in this service and all related services are stable, recursively) and at the environment level (all units in all services are stable).
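To illustrate the first option, here is a minimal sketch of the aggregation an automation tool would have to do itself, assuming a hypothetical per-unit "stable" flag surfaced in the JSON output of juju status. No such flag exists today; the "stable" field is purely illustrative, and the "services"/"units" layout assumed is the 1.x-style status format.

    # Sketch only: assumes a hypothetical per-unit "stable" flag in the
    # output of `juju status --format json`. No such flag exists today;
    # the field name is illustrative.
    import json
    import subprocess


    def environment_is_stable():
        """Return True when every unit of every service reports the
        (hypothetical) stable flag, i.e. no hooks running or pending."""
        status = json.loads(
            subprocess.check_output(["juju", "status", "--format", "json"]))
        for service in status.get("services", {}).values():
            for unit in service.get("units", {}).values():
                if not unit.get("stable", False):
                    return False
        return True

If juju instead did the aggregation itself, the service- and environment-level checks would collapse to reading a single flag rather than walking every unit.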
For what it is worth, I've now seen at least four implementations of 'wait for steady state' and all of them are broken in the same way. Amulet has a particularly convoluted workaround, involving installing subordinate charms everywhere, but it still cannot wait for 'steady state'; it can only wait for a particular state to be reached that might happen to be steady.
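The broken pattern usually looks something like the sketch below (not any particular project's code): poll juju status until every unit reports agent-state "started", which only proves that a particular state was reached at the moment of polling, not that no further hooks are queued. The unit fields assumed here are the 1.x-style status output.

    # Sketch of the commonly seen (and broken) "wait for steady state" loop.
    # "started" only means the unit got that far when we happened to poll;
    # relation hooks can still be queued or running, so the environment may
    # keep changing after this function returns.
    import json
    import subprocess
    import time


    def wait_for_started(timeout=600, poll_interval=10):
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = json.loads(
                subprocess.check_output(["juju", "status", "--format", "json"]))
            units = [
                unit
                for service in status.get("services", {}).values()
                for unit in service.get("units", {}).values()
            ]
            if units and all(u.get("agent-state") == "started" for u in units):
                return  # looks "steady", but hooks may still be pending
            time.sleep(poll_interval)
        raise RuntimeError("timed out waiting for units to report 'started'")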
There is clearly a need for this single bit of environment state to be exposed. Without it, it is impossible to write stable integration tests, because there is no way to know when an environment you have set up is in the state your tests require.