>>27
The file system is an artifact of the limitations of early processors
Yes, but we're stuck with it because it's what most people are used to. The hierarchical file system is simple and robust. It's tedious to serialize and deserialize in some environments, but in shell scripts that treat files almost as variables it's very easy and natural to use. It's also pretty easy to find a way to organize things into a hierarchy, as you say in the following.
A hierarchical name space is still required for users to keep track of where they put things, but virtual memory can hold that tree structure just as well as inodes can, and the judicious use of separate memory spaces can side-step the 4-gigabyte storage limit on 32-bit machines.
Even assuming that all files on the machine would fit in a single 32-bit address space, I wonder where you put the boundary between program data and user data/content. How do I share an image between two programs? Instead of using files (with the implication that the data is stored on a disk), do I have to juggle address spaces (that contain the image in some format) that different programs can map into or out of memory? Or is there no way to separate a program's data (runtime data, code, etc.) from its content? Secondly, how would you ensure consistency?
If we're still talking about a Lisp OS, then I think I see your point. In that case the data would always be in native Lisp data structures that every program on the machine understood and that could be passed from program to program as easily as text streams are piped on Unix. The operating system would somehow automagically map or copy data structures and serialize them when needed, in the same way processes are currently swapped out.
I have a suggestion (not a new one) for a Lisp OS that doesn't go this far but still throws out the file system. Some operating systems have used relational databases instead of, or in conjunction with, a file system, to some success. But instead of a full-blown relational database, what about using tuple spaces for durable storage and IPC? In a process, current-input-port and current-output-port (or equivalents) would be bound to a tuple space from which it could read and write tuples (i.e. lists). Writing a tuple could be done with just plain old write, but as with tuple spaces in general there would be different read procedures for reading or taking a tuple, or for getting any or all of the tuples that match a certain wildcard.
There could be private tuple spaces for programs to store things like settings and other configuration, and shared ones for what would generally be stored on the file system today. Tuple spaces were invented to tackle concurrency and parallelism, and all the operations on them would naturally be atomic. Finally, there could be distributed tuple spaces for a network of machines, where any program with access to the tuple space could read or contribute data.
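To make the idea concrete, here is a minimal in-memory sketch of the tuple-space operations described above (write, non-destructive read, destructive take, and wildcard matching). This is an illustration in Python, not a proposed implementation; the names `TupleSpace`, `ANY`, and `read_all` are my own, and a real OS-level version would be durable and distributed rather than a single locked list.

```python
import threading

ANY = object()  # wildcard marker: matches any field in a pattern

class TupleSpace:
    """A toy tuple space: a shared, atomically-accessed bag of tuples."""

    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()  # every operation is atomic, as in Linda-style tuple spaces

    def _matches(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is ANY or p == t for p, t in zip(pattern, tup)
        )

    def write(self, tup):
        """Add a tuple to the space (analogous to plain `write` in the post)."""
        with self._lock:
            self._tuples.append(tuple(tup))

    def read(self, pattern):
        """Return the first matching tuple without removing it, or None."""
        with self._lock:
            for t in self._tuples:
                if self._matches(pattern, t):
                    return t
            return None

    def take(self, pattern):
        """Remove and return the first matching tuple, or None."""
        with self._lock:
            for i, t in enumerate(self._tuples):
                if self._matches(pattern, t):
                    return self._tuples.pop(i)
            return None

    def read_all(self, pattern):
        """Return all tuples matching the wildcard pattern."""
        with self._lock:
            return [t for t in self._tuples if self._matches(pattern, t)]

# A private "settings" space, as suggested above:
ts = TupleSpace()
ts.write(("config", "editor", "emacs"))
ts.write(("config", "shell", "bash"))
print(ts.read(("config", "shell", ANY)))   # ('config', 'shell', 'bash')
print(ts.take(("config", "editor", ANY)))  # removed: ('config', 'editor', 'emacs')
print(ts.read_all(("config", ANY, ANY)))   # [('config', 'shell', 'bash')]
```

Because read is non-destructive and take is destructive, two cooperating processes can use the same shared space both as storage and as a synchronization channel, which is what makes the model a plausible file-system replacement.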
Maybe there could be pipes as well.