Linux Dedicated: Server fails to read/write files if there is too much free space.

  • Pending

I was experiencing problems running a dedicated server on CentOS, and have tracked it down to the drive the server runs from having too much free space. I'm not sure where the cutoff is, but it is most likely an integer overflow somewhere in the code.

Symptoms include databundle reading being skipped, errors about reading free space, failure to save, and a corrupted world.

Here is the thread with more info and troubleshooting.


I'm still testing it more in a VM, but the server works otherwise on the same system with the same files when using a smaller 16 GB loop device. In a test VM, making the drive very large brings up the error about free space, but it still reads datastores. I tried 20 TB, but on reflection, my actual server runs on a 36 TB drive, so I'll make the VM drive bigger and report back.

As a note, the server runs fine in a Windows VM accessing the same 36 TB storage, but it reports only 4.2 GB of free space (supporting my overflow theory; the overflow just gets caught there).

EDIT: Tried with a 40 TB VHD; the server still starts after resizing the filesystem... not sure about this one. Maybe it's something specific to the filesystem?

Steps to Reproduce

Install and run the dedicated server in CentOS 8, on a partition with a very large size.

Server fails to start.

Extracting the databundles lets the server start, but it then fails to run properly.

Included log is from after extracting the databundles.


