
Vanilla 1.1.10 is a product of Lussumo. More Information: Documentation, Community Support.

    • CommentAuthorhcgtv
    • CommentTimeFeb 17th 2006
     
    Hi,

    Been following the development of Plogger for some months and I like the direction of the project.

    I installed Beta 2 on my laptop for testing purposes and await Beta 3 to take it for a spin. I've been hearing good things about templating and the many new features coming soon.

    Now onto the subject of this post: keeping all thumbnails in one directory has limitations. On certain operating systems, once a directory holds more than 5,000 files there is a performance hit, not to mention the headache of managing such a folder (e.g., try rm * on it).

    Under the images directory, we have collections and albums, which are subdirectories, an excellent way of doing it. But I don't understand why the thumbs directory is not equally partitioned out.
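For illustration, a hypothetical helper (not actual Plogger code; names are my own) that mirrors the collection/album layout under the thumbs directory might look like this:

```python
import os

def thumb_path(thumbs_root, collection, album, filename):
    # Hypothetical helper: mirror the images/<collection>/<album>/
    # layout under the thumbs root instead of one flat folder,
    # so each album's thumbnails live in their own directory.
    path = os.path.join(thumbs_root, collection, album)
    os.makedirs(path, exist_ok=True)
    return os.path.join(path, filename)

# e.g. thumb_path("thumbs", "vacation", "2006-02", "img_0001.jpg")
# yields thumbs/vacation/2006-02/img_0001.jpg on a POSIX system
```

This keeps each directory's file count bounded by the size of one album, at the cost of the rename/cleanup bookkeeping Mike mentions below.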

    I know Plogger falls into the small gallery genre but in today's digital world, taking 1000's of pictures a year is not uncommon. I know from experience, I have over 5,000 photos amassed in two years, so I've already encountered thumbnail issues.

    Thanks.
    • CommentAuthormike
    • CommentTimeFeb 18th 2006
     
    I see your point, Bert. Honestly, the thumbnails aren't partitioned into separate folders because of all the added complexity required to do this: renaming and moving thumbnail folders when you change collection/album names, and the cleanup required to prevent detritus building up from long-forgotten albums.

    Can you point me to some documentation explaining this performance decrease? I have never heard of that. I can see a performance hit for searching the gallery, but for simply querying and displaying thumbnails I don't think this would have any effect. Correct me if I'm wrong!
    • CommentAuthorddejong
    • CommentTimeFeb 18th 2006
     
    To my knowledge, a filesystem doesn't have to list all of the files until it finds the one requested; it should just use its own lookup structures to locate the file and open it, although there may be some differences between filesystems (such as NTFS, NFS, etc.).

    However, given your example, performing rm on 5,000 files spread across 200 directories would take even longer, because it needs to recurse through the directories. The moral of the story? Yes, iterating through 5,000 files takes a long time, but it does so whether or not they are in separate directories; if anything, the extra directories should add a performance hit.

    The question is whether your server needs to iterate through the 5,000 files to find the one it needs. It shouldn't, but there are factors which contribute to this.
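One quick way to check this (a rough sketch, not a rigorous benchmark) is to compare opening one file directly by name against scanning the whole directory listing for it:

```python
import os
import tempfile
import time

# Create 5,000 empty files in one flat directory.
d = tempfile.mkdtemp()
for i in range(5000):
    open(os.path.join(d, "thumb_%d.jpg" % i), "w").close()

# Direct lookup: the filesystem resolves the name itself.
start = time.perf_counter()
open(os.path.join(d, "thumb_4999.jpg")).close()
direct = time.perf_counter() - start

# Full scan: iterate the listing the way a naive search would.
start = time.perf_counter()
found = [f for f in os.listdir(d) if f == "thumb_4999.jpg"]
scan = time.perf_counter() - start

print("direct open: %.6fs, full scan: %.6fs" % (direct, scan))
```

On most filesystems the direct open stays fast regardless of directory size; it's the listing/scanning operations that degrade as the file count grows.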

    If you can find that documentation, I'd be interested in it too.

    Cheers,
    Derek
    • CommentAuthorhcgtv
    • CommentTimeFeb 18th 2006
     
    Mike,

    In a perfect world, we would all be running XFS or ReiserFS filesystems. But as you know, we can't control where our apps will run and for the most part, hosts run ext2 or ext3 file systems.

    This is a good explanation of the current situation:
    http://answers.google.com/answers/threadview?id=122241

    On my local Debian server running ext3, I did a test by creating a gallery with 10,000 images. The thumbnail directory grew to over 20,000 files, given that some galleries create multiple thumbnails per image. The performance hit was very evident when reading and writing thumbnails.

    That's the reason that squid uses a directory tree for its cache, and advises you to mount it as a separate partition and use the noatime option.
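Squid's two-level cache tree can be sketched like this (hypothetical Python; md5 is used here only to spread names evenly, and squid's actual hashing differs):

```python
import hashlib
import os

def hashed_cache_path(root, name, l1=16, l2=256):
    # Spread files across l1 * l2 subdirectories, squid-style,
    # so no single directory ever holds more than a small
    # fraction of the total file count.
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return os.path.join(root,
                        "%02X" % (h % l1),
                        "%02X" % ((h // l1) % l2),
                        name)

# With the defaults, 20,000 thumbnails spread over 16 * 256 = 4096
# directories average roughly 5 files per directory.
```

The same file name always hashes to the same subdirectory, so lookups stay a single path resolution and no directory grows unbounded.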

    Since Plogger is in its initial development phase and is very simple to use, I can see many users loading it up with thousands of images in short order. So I figured I'd share my experience to help make Plogger a small gallery app that can leap tall buildings ;)