
  • ext3 file fragmentation causes performance issue


    I'm sure I'll take a beating for this, but I'll ask anyway. I have files on filesystems that are becoming badly fragmented, and the fragmentation is causing serious performance problems. Here's an example with a 659 MB file:

    -rw-r--r-- 1 root     root  659906342 Oct 19 20:13 marktest

    root@etvfdpa1:/apps/opt/apb # filefrag marktest
    marktest: 237 extents found, perfection would be 6 extents
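    A filefrag check like the one above can be scripted to survey a whole directory and rank files by extent count. This is only a sketch: filefrag ships with e2fsprogs and typically needs root, and the default path below is just an example taken from the prompt shown earlier.

    ```shell
    #!/bin/sh
    # Survey fragmentation: print the extent count for every regular
    # file in a directory, worst-fragmented first.
    DIR=${1:-/apps/opt/apb}
    for f in "$DIR"/*; do
        [ -f "$f" ] || continue
        # filefrag prints e.g. "marktest: 237 extents found, ..."
        # so the second whitespace-separated field is the extent count.
        n=$(filefrag "$f" 2>/dev/null | awk '{print $2}')
        [ -n "$n" ] && printf '%s\t%s\n' "$n" "$f"
    done | sort -rn
    ```

    Running it periodically would show whether particular files keep re-fragmenting, or whether the whole filesystem is affected.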

    I copied this file to another filesystem and it looks like this:

    marktest: 6 extents found

    That copy took about 3 minutes because the source file was so badly fragmented.


    I then copied the same file that was in the filesystem with 6 extents to a third filesystem and it only took 2 seconds. The resulting file also has 6 extents.


    This is killing the performance of our application. When we copy the files that our application reads to a fresh, clean filesystem and read the records from those copies, our application runs in 30 minutes, as opposed to 3 hours on a filesystem that holds the same files in fragmented form.


    I have read everywhere on Google that ext3 filesystems do not need to be defragmented, but I have real evidence that this is not the case.


    How do I defragment these files? A simple copy within the same filesystem does not help. We also cannot take the filesystem offline to fsck it or whatever, because we are processing files constantly, all day.
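    Since a copy within the same filesystem re-fragments the file, one workaround sketch, under stated assumptions, is to extend the experiment that already worked: copy each hot file to a fresh filesystem with contiguous free space and atomically swap a symlink into the old path. The `migrate` helper below is hypothetical, not a supported tool, and it assumes the application follows symlinks; processes that already hold the old file open keep reading the old inode until they reopen the path.

    ```shell
    #!/bin/sh
    # Sketch only: move one fragmented file to a fresh filesystem and
    # leave a symlink at the original path so the application's paths
    # keep working.  Assumes the destination filesystem has plenty of
    # contiguous free space (like the 6-extent copies above).
    migrate() {
        src=$1
        freshdir=$2                       # e.g. a newly created filesystem
        dest=$freshdir/$(basename "$src")
        cp -p "$src" "$dest" || return 1
        ln -s "$dest" "$src.lnk.$$" || return 1
        mv "$src.lnk.$$" "$src"           # rename() swaps in the symlink atomically
    }
    ```

    For example, `migrate /apps/opt/apb/marktest /fresh` would leave the original path pointing at the unfragmented copy. The atomic rename matters here because readers never see a moment where the path is missing.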


    Any help would be greatly appreciated as this issue is killing us.


    Mark

    © 2025 Red Hat, Inc.