1. REPLs allow you to redefine functions. In early REPL languages based on source code in files, it is legal to have more than one definition of a function. Files are read from top to bottom (not an astounding observation, this is not “a trick question”), which means that later definitions override earlier ones (see the first sketch just after this list). [Aside: One thing we've learned is that using stacks/queues is often better than overwriting static cells. Should function definitions be read in stack-based fashion? I don't know exactly what that would mean, nor whether it would be helpful, but it is an intriguing bauble to consider.]
2. REPLs allow you to redefine top-level variables.
3. REPLs allow you to redefine macros.
4. Early REPL-based languages did not have "classes". Classes are a manifestation of static-compilation thinking. The idea of redefining classes on-the-fly is problematic. LispWorks baked that concept into the system, and it continues to work, albeit differently, if you futz with classes from inside the REPL (see the second sketch just after this list).
When you've defined a class ...
Can you remove one field?
Can you add one field?
Can you change the type of one field?
Can you redefine the whole class?
Do you re-type-check a class definition each time it is tweaked?
Do you re-type-check the whole system once any class definition is tweaked?
Prototypal inheritance probably deals with these concepts in a more REPL-like manner. What does it mean to type-check something in a REPL environment, where you can change a user-defined type on the fly?
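To make items 1 and 2 concrete, here is a minimal sketch of a REPL session. Python is used purely as a stand-in (it is not one of the early file-based languages above) and the names are made up, but the behaviour is the common one: a caller looks a function up by name at call time, so a later definition overrides the earlier one even for code that already exists.

```
>>> def greet():
...     return "hello"
...
>>> def shout():
...     return greet().upper()     # looks greet up by name, at call time
...
>>> shout()
'HELLO'
>>> def greet():                   # redefinition overrides the earlier greet
...     return "goodbye"
...
>>> shout()                        # the existing caller picks up the new one
'GOODBYE'
>>> limit = 10
>>> limit = 20                     # top-level variables: the later binding wins
```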
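And a sketch of item 4's questions, again in present-day Python rather than in Lisp. Redefining the class merely re-binds the name; instances created before the redefinition still point at the old class object. You can migrate them by hand, but nothing fills in the new field. CLOS, by contrast, goes to considerable trouble to update existing instances when a class is redefined (update-instance-for-redefined-class), which is presumably part of what LispWorks builds on.

```
>>> class Point:
...     def __init__(self, x, y):
...         self.x, self.y = x, y
...
>>> p = Point(1, 2)
>>> class Point:                     # "redefine the whole class": add a field z
...     def __init__(self, x, y, z):
...         self.x, self.y, self.z = x, y, z
...
>>> q = Point(1, 2, 3)
>>> isinstance(p, Point)             # p still belongs to the *old* Point object
False
>>> p.__class__ is q.__class__
False
>>> p.__class__ = Point              # you can migrate an instance by hand...
>>> p.z                              # ...but nothing fills in the new field
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Point' object has no attribute 'z'
```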
Programming languages are just IDEs for creating programs. All programming languages can be interpreted and run dynamically, but only some can be compiled. Maybe REPLs should be part of modern IDEs, but type-checkers should be moved to a later part of the workflow?
[disclaimer: My views are strongly coloured by the fact that I don't think that type-checking should be a part of a programming language. Type-checking, if used, should be a linter bolted onto the side of some code, not tangled up in the code itself. By analogy: OhmJS gets this kind of thing right, keeping semantics separated from grammar, retaining locality of reference and the purity of grammars. Furthermore, consider what the point of type-checking actually is. Type-checking was a nothing-burger in the days of 7-line BASIC programs. Type-checking only became an issue when programs became too large to fit in one eye-full. Finding better notations for programming, making sure that units are completely stand-alone and fit in a single eye-full, might obviate the need for type-checking (!). This kind of stand-alone isolation doesn't happen with current programming languages, due to method overriding. At the least, visible type-checking annotations might be disappeared. One might still want to infer types for analysis and inferencing, but that isn't actually needed for simply creating programs that work. Type-checking is a useful tool, but not an absolute requirement. Premature type-system-building is about as bad as premature optimization, IMO.]
See Also
Email: ptcomputingsimplicity@gmail.com
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Gumroad: tarvydas.gumroad.com
Twitter: @paul_tarvydas
Substack: paultarvydas.substack.com
FORTH had a [now] unusual approach to re-defining in a REPL: a re-definition only affects future code; all previous code keeps using the previous definition.
This was kinda an incidental side effect of how it's implemented [words resolve to pointers at parse time; a definition prepends a new (name, value) pair to the dictionary list, which can shadow an old name on lookup], but Charles Moore argued that it's an important feature: a re-definition can't accidentally break (nor fix) existing code. [https://www.forth.org/POL.pdf, sections 3.6.2 and 4.4.1, where he even suggests sacrificing the ability to recurse so that a re-definition can refer to the old definition]
Formally, you can think of the old and new definitions as separate things, foo, foo', foo''..., despite all being spelled foo in the source; it's just that you open a new scope and the new definition shadows the one from the previous scope.
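Here's a toy model of that dictionary, sketched in Python just to have something runnable; `define`, `lookup`, and the list-of-pairs representation are made up, and real FORTH compiles words to pointers rather than closures. The essential bit it tries to capture: a word grabs the current definitions of the words it uses at definition time, so a later re-definition only shadows the name for future code.

```python
dictionary = []                          # newest entries go at the front

def define(name, fn):
    dictionary.insert(0, (name, fn))     # prepend; may shadow an older entry

def lookup(name):
    for n, fn in dictionary:             # first (newest) match wins
        if n == name:
            return fn
    raise KeyError(name)

define("greet", lambda: "hello")

# "shout" resolves "greet" *now*, at definition time (early binding),
# the way a FORTH word compiles in a pointer to the current definition.
greet_at_definition_time = lookup("greet")
define("shout", lambda: greet_at_definition_time().upper())

define("greet", lambda: "goodbye")       # shadows the old greet for future code...

print(lookup("shout")())                 # ...but shout still prints: HELLO
print(lookup("greet")())                 # while new lookups see: goodbye
```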
(Similar to how Elixir allows variable "rebinding" as sugar, while each binding remains immutable.)
I think that, viewed this way, there are no problems with type checking: there's no problem with foo, foo', ... each having its own type.
FORTH also allows rolling back a definition, together with _all_ later definitions (!). So in that sense its dictionary works like a stack: MARKER or FORGET can reveal previously shadowed old definitions. [https://forth-standard.org/standard/tools/FORGET]
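Continuing the same toy `dictionary` / `define` / `lookup` sketch from above (still hypothetical, not faithful FORTH in every detail): because definitions sit in a stack-ordered list, a FORGET-like helper can drop a word and everything defined after it, re-exposing whatever was shadowed.

```python
# Reuses `dictionary`, `define`, and `lookup` from the sketch above.

def forget(name):
    """Drop the newest definition of `name` and everything defined after it."""
    for i, (n, _) in enumerate(dictionary):
        if n == name:
            del dictionary[:i + 1]       # entries before index i are newer
            return
    raise KeyError(name)

define("greet", lambda: "hi again")      # a third greet, shadowing the others
forget("greet")                          # rolls back "hi again"...
print(lookup("greet")())                 # ...revealing the shadowed one: goodbye
```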
---
But I think the consensus of pretty much all newer REPLs is that being able to "break" & "fix" prior code is a feature, not a bug: editing a lower-level function _should_ affect all higher-level functions that refer to it! Which brings up all the questions you're listing.
---
P.S. You've probably seen https://www.dreamsongs.com/Files/Incommensurability.pdf, which goes deep into how Lisp and CLOS actively cared about on-the-fly class changes (but says nothing on type checking). However, that's just background to his main question: whether later researchers misunderstood those designs due to "speaking a different language".