I recently noticed a drive in my home NAS was pre-fail according to S.M.A.R.T. diagnostics. When I replaced the drive, I found it difficult to mount the remaining drives in the LVM VG to recover data.
- Use the command `lvchange -ay --partial example_logical_volume` to bring an LVM LV with failed disks online
The LVM volume that the drive was a member of had three drives. The majority of the data in the group was backed up redundantly to a local drive and to cloud storage using Borg backup. About 900GB was not backed up because it was deemed unimportant, certainly not important enough to warrant the cost of backing it up to a cloud server as well as locally, like the other data. However, since the failing drive was still online, I tried to save as much of the 900GB as possible.
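Before attempting any recovery, it helps to see how LVM views the degraded group. A sketch of the usual inspection commands (the volume group name `example_vg` is a placeholder, not from the original setup):

```shell
# List physical volumes; a failed or removed disk shows up as "unknown device"
pvs

# List volume groups; a 'p' in the attr column marks a VG with missing PVs
vgs

# List logical volumes and which underlying devices each one maps to
lvs -a -o +devices example_vg
```

These are read-only queries, so they are safe to run on a degraded group before deciding what to do next.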
I first copied off as much as I could to a spare drive, which was quite successful. During this process, the faulty drive failed completely. This wasn't a problem, as I had saved the 900GB and had the backups. The backups had never actually been used, so I ran `borg check` to ensure they were consistent. Despite that, I was still a little nervous, and I wanted to access the two good drives in the LVM volume to pull off the remaining data. I knew this was possible but had never done it before, and found it very difficult to find information on the web, which is why I am writing this small blog post.
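For reference, verifying a Borg repository looks roughly like this (the repository path is a placeholder, substitute your own):

```shell
# Check the repository and archive metadata for consistency
borg check /mnt/backup/borg-repo

# Optionally also re-read and verify every data chunk;
# much slower, but catches silent corruption in the stored data
borg check --verify-data /mnt/backup/borg-repo
```

The plain `borg check` validates repository structure and archive metadata; `--verify-data` is worth running occasionally when, as here, the backups have never actually been restored from.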
In the end it turned out to be quite simple: all that was required was the `--partial` flag when using `lvchange`. So the command I executed was `lvchange -ay --partial example_logical_volume`. This brought the LV back up and I was able to access the data on the two remaining disks. See the `lvchange` manpage for more information.
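Putting it all together, the recovery sequence looked roughly like the following sketch (the volume group name, mount point, and destination path are placeholders; whether the filesystem mounts at all depends on its metadata not living on the missing disk):

```shell
# Rescan devices, then activate the LV despite the missing PV;
# LVM maps the missing regions to an error target, so reads that
# land on the surviving disks still succeed
vgscan
lvchange -ay --partial example_vg/example_logical_volume

# Mount read-only so nothing writes to the degraded volume
mkdir -p /mnt/recovery
mount -o ro /dev/example_vg/example_logical_volume /mnt/recovery

# Copy off whatever is readable; rsync reports unreadable files
# (those backed by the failed disk) and continues with the rest
rsync -a /mnt/recovery/ /path/to/spare-drive/
```

Activating with `--partial` is deliberately a last-resort, read-oriented operation: anything stored on the failed physical volume returns I/O errors, but everything on the two surviving disks remains accessible.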