Today I plugged in another hard drive and unplugged my RAID drives, to ensure that when I wiped the new drive I would not accidentally pick the wrong drives. Now that I have re-plugged in my drives, the software RAID 1 array is no longer being mounted/recognized/found.

Using Disk Utility I could see that the drives are /dev/sda and /dev/sdb, so I tried running:

    sudo mdadm -A /dev/sda /dev/sdb

Unfortunately I keep getting an error message stating:

    mdadm: device /dev/sda exists but is not an md array

The OS is installed on a third drive (64GB SSD; many Linux installs). I also get:

    mdadm: No arrays found in config file or automatically

When I boot from recovery mode I get a zillion "ata1 error" codes flying by for a very long time.

Can anyone let me know the proper steps for recovering the array? I would be happy just to recover the data, if that is a possible alternative to rebuilding the array.

I have read about "testdisk", and its wiki states that it can find lost partitions for Linux RAID md 0.9/1.0/1.1/1.2, but I am running mdadm version 3.2.5 it seems. Has anyone else had experience with using it to recover software RAID 1 data?

Result of sudo mdadm --examine /dev/sd* | grep -E "(^/dev|UUID)":

    mdadm: No md superblock detected on /dev/sda.
    mdadm: No md superblock detected on /dev/sdb.
    mdadm: No md superblock detected on /dev/sdc1.
    mdadm: No md superblock detected on /dev/sdc3.
    mdadm: No md superblock detected on /dev/sdc5.
    mdadm: No md superblock detected on /dev/sdd1.
    mdadm: No md superblock detected on /dev/sdd2.
    mdadm: No md superblock detected on /dev/sde.

My mdadm.conf:

    # Please refer to mdadm.conf(5) for information about this file.
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    # automatically tag new arrays as belonging to the local system
    # instruct the monitoring daemon where to send mail alerts
    # This file was auto-generated on Tue, 19:53:56 +0000

Result of sudo fdisk -l (as you can see, sda and sdb are missing):

    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

The output of dmesg | grep ata was very long, so here is a link:

The output of dmesg | grep ata | head -n 200 after setting the BIOS to AHCI and having to boot without those two discs:

    ata1: SATA max UDMA/133 abar port 0xfbff4100 irq 47
    ata2: SATA max UDMA/133 abar port 0xfbff4180 irq 47
    ata3: SATA max UDMA/133 abar port 0xfbff4200 irq 47
    ata4: SATA max UDMA/133 abar port 0xfbff4280 irq 47
    ata5: SATA max UDMA/133 abar port 0xfbff4300 irq 47
    ata6: SATA max UDMA/133 abar port 0xfbff4380 irq 47
    pata_acpi 0000:0b:00.0: setting latency timer to 64
    pata_acpi 0000:0b:00.0: PCI INT A disabled
    ata1: SATA link down (SStatus 0 SControl 300)
    ata2: SATA link down (SStatus 0 SControl 300)
    ata3: SATA link down (SStatus 0 SControl 300)
    ata4: SATA link down (SStatus 0 SControl 300)
    ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Write protecting the kernel read-only data: 12288k
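For background on what "No md superblock detected" means: mdadm's v1.2 metadata stores its superblock 4 KiB from the start of each member device, beginning with the magic number 0xa92b4efc, while the older v0.90 format stores it near the end of the device. A minimal sketch of that on-disk layout, using a scratch file instead of a real disk (the temp file is made up for the demo; never run dd against actual drives you are trying to recover):

```shell
# Create an 8 MiB scratch file standing in for a member device.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none

# Simulate a v1.2 superblock: write the md magic 0xa92b4efc
# (stored little-endian, so bytes fc 4e 2b a9) at offset 4096.
printf '\374\116\053\251' | dd of="$img" bs=1 seek=4096 conv=notrunc status=none

# Probe the same offset the way a recovery tool would.
magic=$(dd if="$img" bs=1 skip=4096 count=4 status=none | od -An -tx1 | tr -d ' \n')
echo "magic=$magic"   # fc4e2ba9 means a superblock signature is present;
                      # all zeros is what a wiped or absent superblock reads as
rm -f "$img"
```

Tools like testdisk search for signatures such as this at each metadata version's known offset, which is why they can sometimes locate array members that mdadm --examine no longer reports.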
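One small note on reading that --examine output: the grep -E "(^/dev|UUID)" filter only sees mdadm's standard output, while the "mdadm: No md superblock detected ..." diagnostics go to standard error and therefore bypass the pipe entirely. A quick sketch with a stand-in function (the sample lines are illustrative, not the poster's real output):

```shell
# Stand-in for `sudo mdadm --examine /dev/sd*`: results on stdout,
# diagnostics on stderr, mimicking how mdadm separates the two.
fake_mdadm() {
    printf '/dev/sdc1:\n'
    printf '          Magic : a92b4efc\n'
    printf '     Array UUID : 00000000:00000000:00000000:00000000\n'
    printf 'mdadm: No md superblock detected on /dev/sde.\n' >&2
}

# The grep keeps the device line and the UUID line, drops the Magic line,
# and never sees the stderr diagnostic at all.
fake_mdadm | grep -E "(^/dev|UUID)"
```

That is why the "No md superblock" messages show up in the result even though they match neither pattern in the grep.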