We are running into an intermittent issue with our SQL Server instance running SQL Server 2016. It's not that our tables are all that large (usually), but the server is trying to manage around 550 different databases parked on it. Only a few of the databases are open at any one time, but the server still has to keep track of all of them to some extent.
When the problem arises, our dev team runs into timeouts. Sometimes the problem goes away on its own, sometimes it doesn't, and our IT guys, out of desperation, restart the machine.
The limit is indeed much higher (SQL Server supports up to 32,767 databases per instance), and the number of databases is less of a performance issue than many assume. SQL Server's wait statistics should give you an indication of what is going wrong (a query to check them is sketched below, after the two scenarios). With lots of waits on the network interface (the outgoing one, not the SAN one), you could have one of two issues:
You are transferring too much data, e.g. running SELECT * queries when you only need a few columns. If your server is connected to a single 1-GBit network port (roughly 125 MB/s of raw bandwidth) and you have 1000 clients pulling data at the same time, that works out to about 100 KB/s per client, while SQL Server is serving data from memory an order of magnitude faster.
You have high latency on the network: a VPN, a WAN link, a client on Wi-Fi, a defective switch, or a misconfigured network with ARP floods, etc.
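If you want to confirm that, here is a minimal sketch of a wait-statistics query against sys.dm_os_wait_stats. The excluded wait types are only an illustrative subset of benign background waits, not a complete list. A large share of ASYNC_NETWORK_IO is the usual sign that SQL Server is waiting for clients to consume result sets, i.e. one of the two network scenarios above.

-- Top waits since the last service restart (or since the stats were cleared).
-- The excluded wait types are a small, illustrative subset of harmless
-- background waits, not an exhaustive list.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                         N'CHECKPOINT_QUEUE', N'XE_TIMER_EVENT',
                         N'BROKER_TASK_STOP', N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER  BY wait_time_ms DESC;

Keep in mind the numbers are cumulative since the last restart, so take two snapshots a few minutes apart if you want to see what is happening right now.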
There is another common scenario that is not related to network waits: the server slows down massively but doesn't seem to consume any significant resources. This is usually caused by locking issues once a larger number of users is involved. In this scenario the application typically behaves great for the developer testing on their own, but not at all great when many users start working at the same time.
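A minimal sketch for spotting that while it is happening, using the standard DMVs (nothing here is specific to your environment):

-- Sessions that are currently blocked, together with the session blocking them.
-- Run this while the application feels "hung"; if it comes back empty,
-- blocking is probably not your problem at that moment.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time              AS wait_time_ms,
       DB_NAME(r.database_id)   AS database_name,
       t.text                   AS current_statement
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.blocking_session_id <> 0;

If this keeps returning rows that point at the same blocking_session_id, that session and the statement it is running are your starting point.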
I gave a session on these kinds of issues a few years ago at Southwest Fox.