Hi,
As background, I'm running ESXi 6.5.0 U1. My first datastore is a RAID1 of 2 SSDs that ESXi lives on (I know I could have booted from USB, but this is a work box). The second datastore was 8x 1TB SSDs in RAID6. All disks are connected via an LSI MegaRAID SAS card. This ran fine for years, having been updated from 6.0 to 6.5 along the way.
After a public holiday, I returned to find 2 disks marked failed in the RAID6 array. I had those replaced, and during the rebuild a third disk failed. Using the RAID card's BIOS, I marked the two new disks "good", left it to rebuild, replaced the third disk, let that rebuild finish, then rebooted into ESXi. At no point did I delete the datastore, any VMs, or the array. No surprises, the datastore doesn't mount automatically, but the volume does appear when I run "esxcli storage vmfs extent list" (correct volume name, extent number 0, partition 1).
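For reference, these are the sort of commands I used to confirm the volume is still visible; the naa.xxxxxxxx device name below is a placeholder for my actual LUN:

    esxcli storage vmfs extent list                       # the datastore appears here (extent 0, partition 1)
    esxcli storage filesystem list                        # but it is not listed as a mounted VMFS volume
    partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx   # sanity-check the partition table on the device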
I tried checking the metadata with "voma -m vmfs -f check -d" against that device (exact invocation shown after the list below), which reports 10 total errors found:
- resourcesPerCluster "0"
- clustersPerGroup "0"
- clusterGroupOffset "0"
- resourceSize "0"
- clusterGroupSize "0"
- bitsPerResource "0"
- version "0"
- signature "0"
- numAffinityInfoPerRC "0"
- numAffinityInfoPerRsrc "0"
- Failed to check sbc.sf
- "VOMA failed to check device : Invalid address"
The next step I tried was a "fix", but of course that isn't possible with VOMA v0.7; I'd have to update the host to 6.7 to get fix mode.
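If I do go to 6.7, my understanding is the fix run would look something like the below (same placeholder device name, and "datastore1" standing in for my RAID1 datastore). I'd also dd off the start of the device first so the current metadata is preserved if the fix makes things worse; the 1536 x 1MB figure is my assumption for covering the VMFS metadata region, not something I've confirmed:

    # keep a copy of the metadata region before letting VOMA write anything
    dd if=/vmfs/devices/disks/naa.xxxxxxxx of=/vmfs/volumes/datastore1/vmfs-meta-backup.bin bs=1M count=1536
    # then attempt the repair on the 6.7 host
    voma -m vmfs -f fix -d /vmfs/devices/disks/naa.xxxxxxxx:1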
What's the collective thought: is my datastore toast? Is it worth updating to 6.7 to run VOMA in fix mode? Or should I just put the beast out of its misery, blow the array away, start again, and restore whatever backups I've got?