mount: you must specify the filesystem type


On Red Hat Enterprise Linux 6.2, after an abnormal event occurred, any command I entered returned a bash Input/output error. It may be a filesystem error.
After rebooting Linux, it cannot start up normally; the screen prints:
mount: you must specify the filesystem type
mount: you must specify the filesystem type
mount: you must specify the filesystem type
........

Kernel panic - not syncing: Attempted to kill init!

How can I debug this error and resolve this issue?

Thanks.

Chen Long

Responses

Hello, it appears that your /etc/fstab is not configured correctly, or is not available. In this particular case, you may need to boot from rescue media and manually mount the partition (or volume/filesystem).

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ap-rescuemode.html
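
For reference, a minimal sketch of that process, assuming the rescue image can locate the installation (on RHEL 6 it mounts the found system under /mnt/sysimage by default):

# Boot from the install media, choose "Rescue installed system",
# then switch into the installed system and inspect its fstab:
chroot /mnt/sysimage
cat /etc/fstab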

An example of what your /etc/fstab entries should resemble:

/dev/rootvg/home   /home      ext4    defaults,auto_da_alloc      0  2

As you go through the recovery, you will discover if you have an invalid filesystem (corruption) or one that needs maintenance.
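
A quick, non-destructive way to tell which it is (a sketch; the device path is only the example from the fstab line above - substitute your own):

# -n opens the filesystem read-only and answers "no" to every repair prompt
e2fsck -n /dev/rootvg/home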

Thank you for your reply. Yes, I am now in rescue mode. When the rescue environment attempted to find my Linux installation, I was prompted that I don't have any Linux partitions. Going into the shell, I can't see any of my filesystems defined in the file /etc/fstab. Currently, the fstab file contains the following:

rootfs / rootfs rw,relatime 0 0
/proc /proc proc rw,relatime 0 0
/dev /dev tmpfs rw,seclabel,relatime 0 0
/dev/pts /dev/pts devpts rw,seclabel,relatime,mode=600,ptmxmode=000 0 0
/sys /sys sysfs rw,seclabel,relatime 0 0
none /tmp tmpfs rw,seclabel,relatime,size=256000k 0 0
/dev/loop0 /mnt/runtime squashfs ro,relatime 0 0
/selinux /selinux selinuxfs rw,relatime 0 0

In lvm, when I run vgs, pvs, and lvs, I can see these partitions defined, but lvscan returns:

lvm> lvscan
inactive          '/dev/vg_name/lv_root' [50.00 GiB] inherit
inactive          '/dev/vg_name/lv_swap' [33.50 GiB] inherit
inactive          '/dev/vg_name/lv_app'  [415.00 GiB] inherit

How can I fix this issue? Do I need to reinstall my Linux?

Thanks again.

Run the following and reply with the results - it looks like your data is still there (which is good news ;-)

# Activate the volume group, then each logical volume in it
vgchange -ay vg_name
cd /dev/
for LV in `find vg_name/*`; do lvchange -ay $LV; done
# Mount the root LV and inspect the installed system's fstab and partition table
mount /dev/vg_name/lv_root /mnt
cat /mnt/etc/fstab
parted -s /dev/sda print

Thank you.

bash-4.1# mount /dev/vg_oasrv/lv_root /mnt
mount: you must specify the filesystem type
bash-4.1# mount -t ext4 /dev/vg_oasrv/lv_root /mnt
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg_oasrv-lv_root,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

Did you try lv_app also?

file -sL /dev/vg_oasrv/lv_root
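
For context, an illustration rather than output captured from this system: file -s reads the block device contents directly and -L follows the symlink to the device-mapper node, so an intact ext4 volume should be identified as an ext4 filesystem, while a wiped superblock usually comes back as just "data". Comparing against the LV that still works can confirm that:

file -sL /dev/vg_oasrv/lv_app    # expected to be identified as ext4 if intact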

There is still a possibility your data is there, but this is a fairly advanced recovery at this point and I don't want to risk providing invalid instructions and causing data loss for you. There are ways to recover alternate superblocks and some other tricks. I would recommend opening a case and let the folks that do this often assist you https://access.redhat.com/support/
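
For orientation only, a sketch of what that alternate-superblock approach usually involves (the block number is a common ext4 default, not verified for this volume, and you should not attempt this without a backup or an image of the LV):

mke2fs -n /dev/vg_oasrv/lv_root        # dry run only (-n): prints where backup superblocks would live
e2fsck -b 32768 /dev/vg_oasrv/lv_root  # try the check against a backup superblock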

You can try searching our knowledge base and solutions docs to see if there is one that matches your situation.

Here is an example (which may not apply to you): https://access.redhat.com/solutions/35340

Yes, I can mount lv_app correctly and view its directory tree; only lv_root fails. But in the directory tree, the owner and group names for these files and directories are missing, with only the numeric user ID and group ID shown, like the following:

-rwxrwxrwx       1     601   600    5436278   2016-08-01 22:01   filename

Should I reinstall my Linux OS and migrate lv_app, or continue with recovery?

Thank you for your help.

If you are seeing the UID and GID while booted in rescue mode, that is OK (and sometimes expected); it likely means your installation had a user configured that the rescue image does not - and that is not a big deal.
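
For reference, after a re-install the names will resolve again as soon as users and groups with the same numeric IDs exist; a minimal sketch (the names appgroup and appuser are hypothetical placeholders):

groupadd -g 600 appgroup         # recreate the group with the original GID
useradd -u 601 -g 600 appuser    # recreate the owner with the original UID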

So - it seems as though you corrupted your lv_root. There are additional steps that you can take to try and recover, but like I mentioned before that is not my specialty and I don't want to cause you data loss (and recommend you open a case if you really want to attempt recovery).

You can re-install and preserve the existing data in lv_app - and that may be the quickest/easiest option for you at this point. You may want to make a backup of the data in lv_app before you proceed, though. And make certain that you do NOT select "format" for that volume if you do re-install.
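
A minimal sketch of that backup, assuming lv_app is activated and a destination with enough space is available (the target path here is only a placeholder):

mount -o ro /dev/vg_oasrv/lv_app /mnt              # mount read-only for safety
tar -czf /path/to/backup/lv_app.tar.gz -C /mnt .   # archive the whole tree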

You can try doing an fsck dry run and see if it gives you a result showing what the error is:

fsck -n /dev/vg_oasrv/lv_root    # -n: report problems only, make no changes

Additionally, you can check the blkid output on the server too:

blkid
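
You can also point blkid at the problem LV specifically (a sketch; the mapper path follows the names used earlier in this thread) - if it reports no TYPE there while the other LVs show one, that again points at a damaged superblock:

blkid /dev/mapper/vg_oasrv-lv_root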