Turns out that this is an unknown issue with encfs (and/or leveldb).
I pushed a change to only store the wallet data in encfs, but use the default directory (still configurable) for the rest.
Since I nuked all the data, it's currently re-syncing the whole chain, so I'm going to revisit this once the sync is done.
OK, I also just confirmed that this happens even with the ulimit set correctly in a shell session, running the exec command manually. So the issue is something else.
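For anyone following along: the way to take guesswork out of this is to read the effective limit straight from /proc, which shows what a process actually got regardless of shell settings. A minimal sketch (`self` means the current process; the `pidof bitcoind` lookup in the comment is illustrative):

```shell
# Effective open-files limit of the current process.
# To inspect the running daemon instead, substitute its PID for "self",
# e.g. /proc/$(pidof bitcoind)/limits
grep 'Max open files' /proc/self/limits
```

The line printed shows both the soft and hard limit, so a mismatch between what the unit file says and what the daemon really has is immediately visible.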
Unfortunately, nobody on the web seems to have run into the same issue with this program. :/
I found out that limits from limits.conf don't apply to systemd services at all, but that still doesn't explain why opening a shell as that user shows the default limit even though there are entries in the config file that should change it.
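For context: limits.conf is applied by pam_limits at login time, and systemd services never go through a PAM login session, so units need their own Limit*= directives. A minimal sketch, assuming the unit is named bitcoind.service (drop-in path and value are illustrative):

```ini
# /etc/systemd/system/bitcoind.service.d/nofile.conf
[Service]
LimitNOFILE=65536
```

Followed by `systemctl daemon-reload` and a restart of the service. This still doesn't explain the interactive-shell behavior, though.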
I used that exact code when trying it with that resource. The question isn't so much about that resource as about why none of the various methods work at all.
Hmm:
Note that most process resource limits configured with these options are per-process, and processes may fork in order to acquire a new set of resources that are accounted independently of the original process, and may thus escape limits set
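That "per-process" bit can be demonstrated directly: a forked child (here, a subshell) gets its own copy of the rlimits, and changing them there does not propagate back to the parent. A quick sketch (the 512 value is arbitrary):

```shell
# rlimits are per-process: lowering the soft limit in a subshell
# (a forked child) leaves the parent's limit untouched.
parent=$(ulimit -S -n)
child=$( ( ulimit -S -n 512; ulimit -S -n ) )
echo "parent: $parent, child: $child"
```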
So maybe it's worth fixing the ulimit after all.
Unfortunately, the current ulimit cookbook and resource didn't work for me. When I opened a shell as the satoshi user, it would always show the default value again. Same when I tried setting it in /etc/security/limits.conf directly.
I also set a high value in /proc/sys/fs/file-max and ran sysctl -p to apply it, and that also didn't work.
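Worth noting, and possibly why that step had no effect: fs.file-max is the kernel-wide ceiling on open file handles across all processes, so raising it does nothing for the per-process RLIMIT_NOFILE that "Too many open files" is actually about. The two are easy to compare:

```shell
# system-wide ceiling on open file handles (what fs.file-max controls)
cat /proc/sys/fs/file-max
# per-process soft limit (what the leveldb error is actually hitting)
ulimit -n
```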
I must be doing something terribly wrong for all the normal methods to fail like that.
Damn, after a while of indexing, it runs into the same issue. Goes something like:
LevelDB read failure: IO error: /mnt/data/bitcoin/chainstate/1629202.ldb: Too many open files
I have already upped the open-files ulimit to unlimited in the systemd unit file (and tried doing it via ulimit before that). But somehow it seems stuck at the 1024 default, no matter what I do.
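One thing worth double-checking here: systemd spells the no-limit value `infinity`, not `unlimited`, and a Limit*= value it can't parse is ignored with only a journal warning, leaving the default in place. `systemctl show` reports what was actually applied (unit name assumed):

```shell
# what systemd actually parsed from the unit file, after daemon-reload
systemctl show bitcoind -p LimitNOFILE
```

If that prints the default rather than the configured value, the directive never took effect.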
I'm wondering if there's a different underlying issue and this error message is just a symptom. In any case, I'm completely stuck now, having already spent hours trying to fix it.
Status update:
I wrote a whole new source recipe yesterday (see the latest commit) and got everything working as intended. However, I ran into a leveldb issue when starting bitcoin, which I wasn't able to fix, so I have it re-indexing the whole chain database now.
For some reason it was just using the nginx defaults, which set an extremely low upload size limit. Changed it in production; PR incoming.
Update: it seems to happen for files past a certain size limit. But instead of complaining that the file is too large (which is a bit silly at e.g. 2 MB), it just does nothing.
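For reference, the nginx directive in question is client_max_body_size, which defaults to 1 MB; requests over that are rejected with HTTP 413, which many upload frontends surface as a silent failure rather than an error message. A sketch (the size is illustrative):

```nginx
# valid in http, server, or location context; default is 1m
client_max_body_size 20m;
```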