Endless backward compatibility is unrealistic and there is a point where providing some aspect of it costs a company more than it's worth in user retention. Should Microsoft or anyone else really have to deal with all the heartache of enabling and supporting 16-bit software on a 64-bit OS at this point in time, especially when there are solutions like virtualisation and containers? Windows does a better job than almost anything bar certain big-iron mainframe operating systems in this respect.
It doesn't take any "extra" work at all to enable software that uses a smaller memory schema to work on an OS that can use larger memory schemas, as long as the OS is designed from the ground up not to isolate those schemas into separate silos. It was predictable when Office 97 was released that more addressable memory would eventually be needed, so it could have been planned for at that time. In fact, OSes of that era could already address locations on hard drives far larger than anything available for sale. It doesn't magically take fewer bits to record the location of something beyond 4 GB on a hard drive than it takes to record the location of something beyond 4 GB in memory. OSes are not designed that way, and that is deliberate planned obsolescence, so vendors can extract more money from customers. There is no technical issue behind it at all.
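To make the arithmetic concrete, here is a minimal sketch in C (my own illustration, nothing more): a 32-bit address can name at most 2^32 byte locations, i.e. 4 GiB, no matter whether that location is in RAM or on a disk. That is exactly why 32-bit-era file APIs already carried 64-bit offsets (Win32's SetFilePointer, for example, accepts a separate high 32 bits), while memory addresses were left at 32.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A 32-bit address can name 2^32 distinct byte locations: 4 GiB. */
    uint64_t limit_32 = (uint64_t)1 << 32;

    /* A 64-bit offset -- the width file positions already used on
       32-bit systems -- can name 2^64 locations, far beyond any disk
       or RAM size of the Office 97 era. */
    printf("32-bit limit: %llu bytes (4 GiB)\n",
           (unsigned long long)limit_32);
    printf("64-bit limit: roughly 1.8e19 bytes (16 EiB)\n");
    return 0;
}

The point is that recording a location past 4 GB takes more than 32 bits wherever that location happens to be; the only question is whether the OS is designed to carry them.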
Console vs. graphical software is a more difficult problem, but it's not the same issue as 16-bit vs. 32-bit or 64-bit.
As I noted in my previous reply, running software that is not "NT-aware" on later OSes does result in problems, many, but not all, of which can be corrected with registry hacks.
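For what it's worth, here is a sketch of one common kind of registry hack (not necessarily the specific ones I described in that earlier reply), written as a small Win32 C program rather than a .reg file. It sets a per-program compatibility layer under the AppCompatFlags\Layers key, the same setting the Compatibility tab writes; the executable path is made up for illustration, and which layer names exist varies by Windows version.

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Hypothetical example: the path below is invented. This writes the
       per-program compatibility-layer value that Windows stores under
       AppCompatFlags\Layers for the current user. */
    const char *exe   = "C:\\OldApps\\legacy.exe";
    const char *layer = "WINXPSP3";  /* available layer names vary by Windows version */
    HKEY key;

    LONG rc = RegCreateKeyExA(HKEY_CURRENT_USER,
        "Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers",
        0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegCreateKeyExA failed: %ld\n", rc);
        return 1;
    }

    /* Value name = full path of the old program, data = the layer to apply. */
    rc = RegSetValueExA(key, exe, 0, REG_SZ,
                        (const BYTE *)layer, (DWORD)strlen(layer) + 1);
    RegCloseKey(key);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegSetValueExA failed: %ld\n", rc);
        return 1;
    }

    printf("Compatibility layer '%s' set for %s\n", layer, exe);
    return 0;
}

Fixes like this help with a lot of pre-NT behaviour, but as I said, not all of it.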
64-bit is not necessary or helpful for most line-of-business applications. Instead of trying to convince people to accept it as the default, it should be treated as a specialty option for specific uses, such as high-capacity servers.
Ken Dibble www.stic-cil.org