Would prefer to use some troubleshooting commands on the CLI to see what is really going on underneath the software that reports over SNMP direct from the SAN. If anyone out there has a good source of CLI commands for the Dell EqualLogic controllers I'd appreciate it.

Also, pinging of the controller management interface is now <1 ms. Something to note if we have this again: no failures (the VCSA VM backup had failed every day since the issue occurred, but is now completing successfully), and the mailbox stores are no longer being failed over on the DAG during the backup of the mailbox server.
Now we are running on CM0, the Veeam backups are all back up to full speed. I assume this happened during the reboot of the interface: the error light lit red on CM1 for a second, but then went green again. Within a flash the controller had handed over to CM0 (which was now active) whilst CM1 rebooted.
I connected a PC directly to the active controller (CM1) via a serial cable, used PuTTY to connect to the interface, and sent the restart command. I find it very unusual that the SAN was not generating any errors for this; even in Group Manager the system was reporting no issues and all the hardware monitoring was ticked green!! Goes to show you cannot believe everything, even when the software tells you it is fine.
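For anyone repeating this over serial: to the best of my knowledge the PS Series console connection is 9600 baud, 8 data bits, no parity, 1 stop bit, no flow control (do check the hardware manual for your array model - these settings are from memory, not from the post above). In PuTTY that means a Serial session configured like:

```
Connection type: Serial
Serial line:     COM1      (whichever port your cable is on)
Speed (baud):    9600
Data bits:       8
Parity:          None
Stop bits:       1
Flow control:    None
```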
The odd EqualLogic alert we keep seeing reads: "The SAN HQ server issued SNMP requests to the group, some or all of which failed. This condition is likely due to network congestion or failed network connectivity. Check your network connectivity and check that the storage group is online. If the problem persists, contact your Dell support provider for assistance."

SAN HQ is installed on a VM that sits on the same network as the management interface of the active controller of the Dell SAN. All networking has been checked and there is no latency on the network; anything else pinged on the same LAN (including VMs hosted on the SAN) comes back in <1 ms. The only thing showing some latency is the SAN's active controller management interface: pinging the IP address of the management card comes back anywhere from 2 ms to 9 ms, up into the high 20s and 30s, and as high as 54 ms (mostly in the 20s to 30s). In networking in SAN HQ it shows <1 Kb per sec both received and sent for the management interface.

Can the active controller management interface be restarted from the CLI without affecting production? Or should I be issuing the restart command from the CLI to fail the active controller over to the secondary? Yesterday during low I/O I did go into the Dell Group Manager, into the Maintenance tab, and tried the restart there, supplied the grpadmin password, and nothing happened; the active controller did not fail over to the secondary?!?

In the Group Manager all statuses are healthy and there are no alerts on any of the monitored hardware. Controllers, battery statuses, hard disks etc. are all reporting as they should. Has anyone else seen this type of unusual activity? Any help is appreciated on this.

Thanks for the reply - Delayed ACK settings were already correct. Dell support passed me a PDF a few years back with what the settings should be set to in the ESXi environment as well as on the switches connecting the SAN to the hosts.
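To put numbers on the management-interface latency, here is a minimal sketch that parses captured `ping` output and summarises the round-trip times; the 10.0.0.10 address, the sample output, and the 5 ms threshold are made up for illustration, not taken from the environment above.

```python
import re
from statistics import mean

# Matches "time=23.4 ms" (Linux) and "time=23ms" / "time<1ms" (Windows).
LAT_RE = re.compile(r"time[=<]\s*([\d.]+)\s*ms")

def latency_stats(ping_output, threshold_ms=5.0):
    """Return (min, avg, max, count-over-threshold) for the RTTs
    found in a block of ping output text."""
    times = [float(m.group(1)) for m in LAT_RE.finditer(ping_output)]
    if not times:
        raise ValueError("no ping replies found in output")
    over = sum(1 for t in times if t > threshold_ms)
    return min(times), round(mean(times), 1), max(times), over

# Hypothetical capture of pings to a management interface.
sample = """\
Reply from 10.0.0.10: bytes=32 time=23ms TTL=64
Reply from 10.0.0.10: bytes=32 time=54ms TTL=64
Reply from 10.0.0.10: bytes=32 time=2ms TTL=64
Reply from 10.0.0.10: bytes=32 time=31ms TTL=64
"""

print(latency_stats(sample))  # -> (2.0, 27.5, 54.0, 3)
```

Run against a log of pings to the management IP, this makes it easy to show support how far outside the <1 ms LAN baseline the interface sits.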
The system had been stable and working fine for many, many months, possibly a couple of years, and both EqualLogics have been set up correctly as per Dell recommended settings. There were no errors from the Veeam side of things and nothing showing in SAN HQ. The only things indicating an issue were generic ESXi errors within the VM environment - a VM rebooting during backup, the backup failing and the snapshot being left behind - and even the on-prem Exchange DAG automatically moving the DBs whilst Exchange was being backed up. It was always happening overnight whilst the backup was running, and again no errors were reporting anywhere except for the sporadic ones mentioned above and the odd EqualLogic alert about failed SNMP requests (quoted above). In the end we bit the bullet, found a maintenance window and shut the whole lot down, as well as powering off the SAN. Once everything was back up it all returned to normal, with no errors overnight during backup.
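The alert's wording ("some or all of which failed") draws a line between congestion and a dead link. As a toy illustration only - this is my own sketch, not SAN HQ's actual logic, and the function name is made up - that decision could look like:

```python
def classify_snmp_polls(results):
    """Classify a window of SNMP poll outcomes.

    `results` is a list of booleans, True where the group answered
    the poller's request. Illustrative only.
    """
    if not results:
        raise ValueError("need at least one poll result")
    failures = results.count(False)
    if failures == 0:
        return "healthy"
    if failures == len(results):
        return "connectivity failed"  # every request went unanswered
    return "possible congestion"      # only some requests failed

print(classify_snmp_polls([True, False, True]))    # -> possible congestion
print(classify_snmp_polls([False, False, False]))  # -> connectivity failed
```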
We have two separate EqualLogic setups on two separate sites, and previously, on the second site, Veeam backup performance dropped terribly.