When you write any non-trivial software, you end up with many functions that take input from outside and produce output consumed by yet other functions. A good practice is to specify explicitly the boundary values for all inputs and outputs. That way, you know what values to expect and can issue an error or warning message when the boundaries are exceeded. If you don't write code to check the boundaries, you may end up wasting time on complicated bugs. Those bugs are especially nasty when they occur at sparse, random intervals deep inside the system.
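As a minimal sketch of such a boundary check (the function name, bounds, and warning behavior here are illustrative assumptions, not part of any particular codebase):

```python
import warnings


def check_bounds(name, value, lo, hi):
    """Return True if value lies within [lo, hi]; otherwise warn and return False.

    name is included in the warning so the offending signal is easy to find.
    """
    if lo <= value <= hi:
        return True
    warnings.warn(f"{name}={value} is outside the expected range [{lo}, {hi}]")
    return False
```

Calling `check_bounds("reference", 1.7, -1.0, 1.0)` would emit a warning and return `False`, letting the caller decide whether to abort or substitute a safe value.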
Once your boundary-checking code is in place, you will also want your algorithm to behave gracefully when the input is bad. Say you have a model that tries to follow an input reference (desired) signal. The reference signal is supposed to be a sine wave with amplitude 1, so your expected range is [-1, 1]. Under normal conditions the output will look like this:
If for some reason you get an unexpected value (a division by zero here, a faulty sensor reading there) and you have no safety net in place, your output will become erratic:
As you can see, the sudden jump takes time to die out and distorts the system even after the input stabilizes. The simplest way to handle unexpected jumps is to fall back to the last good (i.e. within-bounds) value and hold it until the signal is valid again:
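The hold-last-good-value strategy described above can be sketched as follows (the function name, the [-1, 1] default bounds, and the initial fallback of 0.0 are assumptions for illustration):

```python
def hold_last_good(samples, lo=-1.0, hi=1.0):
    """Replace out-of-range samples with the most recent in-range value.

    Before the first valid sample arrives, 0.0 is used as an assumed
    safe default.
    """
    cleaned = []
    last_good = 0.0
    for x in samples:
        if lo <= x <= hi:
            last_good = x  # remember the latest valid sample
        cleaned.append(last_good)  # out-of-range samples reuse last_good
    return cleaned
```

For example, `hold_last_good([0.5, 9.0, -0.2])` keeps the spike of 9.0 out of the output by repeating the previous valid value 0.5.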
This simple strategy ensures there is not too much distortion after the signal returns to a valid state.
Moral of the story: Always specify the boundaries of inputs and outputs (for large systems, in an interface control document). Check them, and make sure you have the necessary code guarding against cases where something unexpected happens. Ignore this, and you will waste away your life chasing elusive bugs (as this poor soul has done countless times, at his own peril).