Take a look at Sage's syntax and its leveraging of Python -- that is very Maple-like. The fundamentals of the language (Python + Sage extensions) are quite close to Maple's, as is the paradigm of using an interpreted language to 'script' a set of fast base functions. While the Sage developers may now be influenced by other systems, their basic system is essentially a copy (at the design level) of Maple, whether they like it or not. Personally, I find this very disappointing, since we've learned a lot about how to build CA software in the last 25 years!
Current research in parts of Physics does strike me very much as weakly typed. This certainly allows you to experiment a lot more, since you don't need to be concerned with small stuff like "making sense"...
Over the years, I have seen the benefits of both certainty (through solid foundations) and free-for-all experiments. I am interested in building something which allows both. And, by the way, the only 'solid foundations' systems (as far as I am concerned) that are 'real' are Coq and Isabelle; there are a number of smaller systems with solid foundations, but their libraries are too small to be interesting. Of course, both those systems are notoriously hard to use as well as being quite slow for doing computations.
The degree-of-typing picture you refer to is quite good. To be more precise, it is a picture of 'static' typing, i.e. typing at 'compile time'. The languages I like these days (Haskell and O'Caml) would fit best in the extreme upper right corner (although, to be fair, Agda 2 would be even further out). Maple would fit right in between Tcl/Perl and VB, fairly high up [not always a good thing]. It is indeed true that Mupad is more typed than Maple, and Axiom further still. Mathematica is essentially the same as Maple on the 'types' scale, as is Macsyma. The Wikipedia article is very misleading because it does not differentiate between static and dynamic typing! On the dynamic typing scale, Maple is extremely strongly typed -- its type system is Turing-complete!
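To illustrate what a Turing-complete dynamic type system means in practice, here is a minimal Python sketch (purely illustrative, not Maple's actual `type()` implementation): if a 'type' is allowed to be an arbitrary predicate that runs at check time, then the type language inherits the full computational power of the host language.

```python
# Sketch: a dynamic "type" is an arbitrary predicate checked at run time,
# in the spirit of Maple's type() facility.  Because checking a type may
# run any program, the type language is as expressive as the language itself.

def has_type(value, ty):
    """ty is either a Python class or a predicate (a 'structured type')."""
    if isinstance(ty, type):
        return isinstance(value, ty)
    return bool(ty(value))        # arbitrary computation at check time

# Structured-type constructors are just higher-order predicates:
def list_of(elem_ty):
    return lambda v: isinstance(v, list) and all(has_type(x, elem_ty) for x in v)

even = lambda n: isinstance(n, int) and n % 2 == 0

print(has_type([2, 4, 6], list_of(even)))   # True
print(has_type([2, 3], list_of(even)))      # False
print(has_type(7, int))                     # True
```

The point of the sketch is only that 'dynamically strongly typed' is not an oxymoron: the checks are as precise as you care to program them, they just happen at run time rather than compile time.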
As far as denotational semantics is concerned, you are correct, it does seem backwards! But that is the fundamental issue with symbolic computation -- it is all about intensional statements being interpreted as if they were actually extensional. People use Maple as if it were about actual mathematical objects, while what Maple really does is 'symbol shuffling'. You need strong reflection theorems to show that these coincide. We know that they do coincide for first-order logic and algebra, but such theorems are 'unknown' for analysis and geometry (and indeed plenty of counter-examples to naive reflection theorems are well known).
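To make the intensional/extensional gap concrete, here is a small Python sketch (an illustration, not any particular CA system): the standard symbolic cancellation (x^2 - 1)/(x - 1) ~~> x + 1 is pure symbol shuffling, yet the two expressions do not denote the same function -- they disagree at x = 1.

```python
# Two expressions "equal" after a routine symbolic cancellation:
#   (x^2 - 1)/(x - 1)  ~~>  x + 1
# The rewrite is intensionally fine; extensionally the functions differ.

def f(x):
    return (x**2 - 1) / (x - 1)   # undefined at x = 1

def g(x):
    return x + 1                  # total: defined everywhere

# They agree wherever both are defined...
assert all(f(x) == g(x) for x in [-3, 0.5, 2, 10])

# ...but the original expression has no value at x = 1,
# while the "simplified" one happily returns 2.
try:
    f(1)
except ZeroDivisionError:
    pass  # f is genuinely undefined here
assert g(1) == 2
```

A reflection theorem is exactly what licenses ignoring this kind of discrepancy (or tells you precisely the side conditions under which you may), and for analysis such theorems are where the trouble starts.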