


With your permission

Jun. 26th, 2007 | 06:27 am
mood: geeky

So many misconfigurations, random failures, data losses, and mysterious bugs could be mitigated by one simple feature, which I propose should be intrinsic to a filesystem: a flag for whether a folder can be altered by automated processes. Really, what I'd like is to be able to distinguish between things that happen because of a command I typed (my direct intentions), side effects of a command I typed, and things a software agent is doing on my behalf, and to allow or disallow each of these per folder.

Right now this can be done by agents designed for it, if they deliberately run without root privileges and fail noisily enough when they can't do something as a result. But I'd like to be able to impose this behavior on something that isn't designed for it. The middle case is addressed by things like bash's 'noclobber'. Hmm, maybe make it owned by user root and a special group, with permissions d---rwx---? And conversely, folders owned by user 'nobody' and group 'interactive-user', with permissions drwx---rwx?
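
Roughly, something like this (the group names and paths are invented for the example, and I haven't actually tried any of it):

    # a folder writable only by members of a special group (call it 'automation'):
    # owner root, mode d---rwx---
    groupadd automation
    mkdir /srv/agent-work
    chown root:automation /srv/agent-work
    chmod 070 /srv/agent-work

    # and the converse from above: owner 'nobody', group 'interactive-user', mode drwx---rwx.
    # note that group bits take precedence over 'other' bits for group members,
    # so these bits actually deny members of 'interactive-user' while letting
    # everything else write.
    groupadd interactive-user
    mkdir /srv/hands-off
    chown nobody:interactive-user /srv/hands-off
    chmod 707 /srv/hands-off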

I think Gödel's Incompleteness applies here somewhere. I'll have to do some experimenting.

Note that my scheme would mandate two sorts of /tmp folder: one that the system uses, and one that a user can manipulate.
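
For example (the second path is invented, and untested):

    # the usual world-writable, sticky-bit /tmp stays for the system and its daemons
    chmod 1777 /tmp
    # plus a separate scratch area reserved for the interactive user's group
    mkdir /tmp-interactive
    chown root:interactive-user /tmp-interactive
    chmod 1770 /tmp-interactive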

It would be interesting to have a way of encoding *why* some filesystem permission is not granted.


Comments {3}

(Deleted comment)

Triple Entendre


from: triple_entendre
date: Jun. 26th, 2007 11:23 pm (UTC)

Wow, sort of the ontological opposite of taking off and nuking the site from orbit!

I actually used to do something like this with a program called GoBack on Windows 98. It gave the entire filesystem a "rewind" button.

In theory, you could do this on Linux with something like LVM snapshots.
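
Something along these lines, maybe (volume and mount names invented, untested):

    # snapshot the logical volume holding /home before the automation runs
    lvcreate --size 1G --snapshot --name home_before /dev/vg0/home
    # ...automated tidying happens overnight...
    # mount the snapshot read-only and fish out anything that got clobbered
    mkdir -p /mnt/rewind
    mount -o ro /dev/vg0/home_before /mnt/rewind
    cp /mnt/rewind/me/important-file /home/me/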


Ephemera


from: tawnygnosis
date: Jun. 26th, 2007 03:14 pm (UTC)

That sounds very logical to me; isn't that the way things are today? Are the primary hard drives and such of computers tagged as unalterable in any way?


Triple Entendre


from: triple_entendre
date: Jun. 26th, 2007 11:18 pm (UTC)

It's mostly that way today, but I'm looking for a few specific cases that aren't covered well. Things like, if I'm logged in and checking my email, that's great, let me delete stuff in MyEmailFolder/. Now let's say I'm not logged in, but there's some automated program that runs late at night tidying stuff up on my behalf. I'd like to be able to set file permissions such that that program can't touch MyEmailFolder/.
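
Just to make it concrete (the 'janitor' account is invented, and none of this is tested):

    # if the nightly tidier runs as its own user, ordinary permissions get partway there:
    chmod 700 ~/MyEmailFolder    # only my account (and root) can touch it
    # the trouble is that a lot of automation runs *as me*, from cron or a login
    # script, and then the permission bits can't tell my typing apart from the agent.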

Now this automated program has a folder for its temporary storage, called Stuff/.
It needs to be able to make files in there and delete them when it's done; whatever it wants. Right now, I could save some file in there, thinking I was going to keep it, and the automated program could wipe it out later. I'd like the filesystem to stop me from putting a file there in the first place. But I *would* like to be able to write programs that could put files there.
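
A crude approximation with what exists today (names invented again, untested):

    # let the agent own its scratch folder, and keep my interactive account out of it
    mkdir ~/Stuff
    chown janitor:janitor ~/Stuff
    chmod 770 ~/Stuff    # drwxrwx---: I'm not in group 'janitor', so I can't put files here
    # but a program I write can still be allowed in, by running it as that user:
    sudo -u janitor ./my-batch-job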

It's easy to make whole filesystems read-only and stuff; it's just hard to handle the cases where manual AND automated things could overlap. Linux distros are getting worse and worse about this, with multiply-sourced config files that aren't really used directly; instead a dynamically generated one gets consulted, based on blah blah blah.

Sorry, I have to stop; it's taking too long to write something sensible about this without making up examples that don't quite work. I'll post more if I figure out more about it. :)
