FORTH had a [now] unusual approach to re-defining words in a REPL: a new definition only affects future code; all previous code keeps using the previous definition.
This was kind of an incidental side effect of how it's implemented [words are resolved to pointers at parse time; a definition prepends a new (name, value) pair to the dictionary list, which can shadow an old name on lookup], but Charles Moore argued it's an important feature: re-definition can't accidentally break (nor fix) existing code. [https://www.forth.org/POL.pdf, 3.6.2 + 4.4.1, where he even suggests sacrificing the ability to recurse so that a re-definition can refer to the old definition]
Formally, you can think of the old and new definitions as separate things, foo, foo', foo''..., despite all being spelled foo in the source: each re-definition opens a new scope, and the new definition shadows the one from the previous scope.
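A minimal sketch of that mechanism (in Python, not Forth, and only a model of the idea, not any real implementation): the dictionary is a list with the newest entry first, lookup takes the first match, but already-"compiled" code holds a direct reference to whatever entry existed when it was parsed.

```python
# Model of a Forth-style dictionary: newest definitions first.
dictionary = []

def define(name, fn):
    # Prepend a (name, value) pair; older pairs with the same name
    # stay in the list but are shadowed on lookup.
    dictionary.insert(0, (name, fn))

def lookup(name):
    for n, fn in dictionary:
        if n == name:
            return fn
    raise KeyError(name)

define("foo", lambda: 1)          # : foo ... ;  (first definition)

# "Compiling" bar resolves foo to a pointer *now*:
foo_ptr = lookup("foo")
define("bar", lambda: foo_ptr() + 10)

define("foo", lambda: 2)          # re-definition shadows the old foo

print(lookup("foo")())            # 2  -- new code sees the new foo
print(lookup("bar")())            # 11 -- bar still calls the old foo
```

Note that bar keeps working against the old foo precisely because the pointer was captured at parse time, which is the whole point of the "can't accidentally break existing code" argument.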
(Similar to how Elixir allows variable "rebinding" sugar, but each binding is immutable.)
I think this way there are no problems with type checking: foo, foo', ... can each have its own type.
FORTH also allows rolling back a definition, together with _all_ later definitions(!)
So in that sense its dictionary works like a stack — MARKER or FORGET can reveal previously shadowed old definitions. [https://forth-standard.org/standard/tools/FORGET]
---
But I think the consensus of pretty much all newer REPLs is that being able to "break" & "fix" prior code is a feature, not a bug: editing a lower-level function _should_ affect all higher-level functions that refer to it! Which brings up all the questions you're listing.
---
P.S. You've probably seen https://www.dreamsongs.com/Files/Incommensurability.pdf, which goes deep into how Lisp and CLOS actively cared about on-the-fly class changes (but says nothing about type checking). However, that's just background to his main question: whether later researchers misunderstood those designs due to "speaking a different language".