My first try at using RAID1 ended with CentOS not starting anymore and throwing me into rescue mode. Luckily it's only a test system.
At first I set up the RAID1, which worked rather well. Later I noticed that my partitions only cover 2.2 TB of the available 3 TB, even though I used fdisk's default start/end sectors, so it didn't use the full HDD. Is this normal? (Edit: yes it is; fdisk writes an MBR partition table, which can't address more than 2 TiB, so I'm using parted with a GPT label now.)
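For reference, this is roughly how I'm partitioning with parted now (a sketch from memory, assuming /dev/sdb; the raid flag marks the partition as an mdadm member):
parted /dev/sdb mklabel gpt
parted -a optimal /dev/sdb mkpart primary 0% 100%
parted /dev/sdb set 1 raid on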
When I noticed, my first thought was 'Yup, just remove the RAID and start over again', but it seems removing an array is much more involved than creating one. These are roughly the commands I used to delete it:
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb (which didn't work)
mdadm --zero-superblock /dev/sdc (didn't work either)
mdadm --remove /dev/md0
umount /dev/md0
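From what I've read since, the order probably should have been the other way around (a sketch, assuming the array members are the whole disks rather than partitions):
umount /data                      # unmount the filesystem first
mdadm --stop /dev/md0             # stop the array so the member devices are released
mdadm --zero-superblock /dev/sdb  # the superblock wipe only works once the array is stopped
mdadm --zero-superblock /dev/sdc  # (and only on the actual member, e.g. /dev/sdb1 if built on partitions)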
Then I removed my entry in /etc/fstab. That was the moment I got thrown into rescue mode (after a reboot, of course), so I deleted the partitions on sdb and sdc too, but that didn't change anything.
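In case it helps, this is how I've been checking the current state (the device paths are from my setup):
cat /proc/mdstat            # no longer lists md0
lsblk /dev/sdb /dev/sdc     # partitions are gone here too
mdadm --detail /dev/md0     # just errors out now, as expected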
The errors from 'journalctl -xb' say:
Received SIGRTMIN+20 from PID 1694 (plymouthd)
Timed out waiting for device dev-md0.device.
Dependency failed for /data
Dependency failed for Local File Systems
Dependency failed for Mark the need to relabel after reboot.
Job rhel-autorelabel-mark.service/start failed with result 'dependency'.
Dependency failed for Migrate local SELinux policy changes from the old store structure to the new structure.
Job selinux-policy-migrate-local-changes@targeted.service/start failed with result 'dependency'.
Dependency failed for Relabel all filesystems, if necessary.
I can't understand why it's still trying to mount /dev/md0 to /data, since I removed the fstab entry and the /dev/md0 device doesn't exist anymore.
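In case it matters, these are the things I still plan to check (assuming CentOS 7 with dracut; the idea that stale RAID config could be baked into the initramfs is just my guess):
cat /etc/mdadm.conf     # leftover ARRAY line for md0?
lsinitrd | grep -i md   # old raid config inside the initramfs?
dracut -f               # rebuild the initramfs if so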
Thanks!