This was an interesting development today. In my lab, all of the VMDKs are thinly provisioned. iSCSI connects the backend Synology to the ESXi host (6.5.0 Update 1 - build 7526125). The NAS is VAAI compatible, has been completely reliable and stable, isn't overtaxed, and has TBs of free space if I need to allocate more.
Prior to messing with the lab, I used the web interface to take a snapshot of the systems I wanted to work on, so I wouldn't have to rebuild or undo the damage if I made a mistake. A very typical use-case. The datastore is 500GB and was sitting at 176GB in use. After two days of working on 5 VMs (5 VMDKs), it was time to delete the snapshots and commit the changes, which I did one VM at a time.
What I didn't realize was that, as the snapshots were deleted, all of the VMDKs became fully allocated. By some miracle, there was 26MB remaining out of the 500GB when the last disk finished consolidating. I don't understand why this happened today.
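In case anyone wants to check for the same behavior: comparing a disk's provisioned size against the blocks it actually consumes on the VMFS volume shows whether it has been inflated. A rough sketch from the ESXi shell (the datastore and VM names below are placeholders, not my actual paths):

    # Provisioned (apparent) size of the flat extent
    ls -lh /vmfs/volumes/datastore1/somevm/somevm-flat.vmdk
    # Blocks actually consumed on the datastore; for a healthy thin disk
    # this should be noticeably smaller than the ls figure
    du -h /vmfs/volumes/datastore1/somevm/somevm-flat.vmdk

If du reports roughly the same size as ls, the thin disk has been fully inflated.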
I'm aware I can fix this by taking the affected VMs offline and using vmkfstools to clone the VMDKs to another datastore and move them back - but it's an annoying workaround for something that should never have happened in the first place.
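For reference, this is roughly the process I mean, sketched with placeholder datastore and VM names (VM powered off first):

    # Clone the inflated disk to another datastore, re-thinning it in the process
    vmkfstools -i /vmfs/volumes/datastore1/somevm/somevm.vmdk -d thin /vmfs/volumes/datastore2/somevm/somevm.vmdk
    # Then either repoint the VM at the new disk or clone it back the same way

An in-place alternative would be vmkfstools -K (punchzero), which deallocates zeroed blocks without needing a second datastore, though if I understand it correctly it only reclaims blocks the guest has actually zeroed:

    vmkfstools -K /vmfs/volumes/datastore1/somevm/somevm.vmdk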
Any ideas?