node provides the ability to define a model of your site in terms of different classes of pages, with rules about which page types are permitted inside other page types, along with information about which regions of content each class of page supports.
Once you have defined this model and associated each class of page with view templates, node compiles this into a functioning website with the corresponding edit interface.
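To make the model concrete, here is a minimal sketch of such a page-type definition. The `PageType` class, the `permits_child` method and the "Section"/"Article" types are all hypothetical names invented for illustration, not node's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class PageType:
    """A class of page: which child page types it may contain,
    which content regions it supports, and which template renders it."""
    name: str
    allowed_children: set = field(default_factory=set)
    regions: tuple = ()
    template: str = ""

    def permits_child(self, child):
        # Containment rule: only listed child types may be created inside us.
        return child.name in self.allowed_children

# Hypothetical two-type site model
section = PageType("Section", allowed_children={"Article"},
                   regions=("header", "main", "sidebar"),
                   template="section.html")
article = PageType("Article", regions=("main",), template="article.html")

assert section.permits_child(article)
assert not article.permits_child(section)
```

The compiler step would then walk a tree of page instances, validate each parent/child pair with `permits_child`, and render each page through its type's template.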
The node model supports an arbitrary number of users (we have production sites with more than 60,000 registered users). Each user can be a member of multiple Access Control Lists (ACLs), and every action that can be performed in the node system is associated with a single ACL. This means you can create a security model that is as fine-grained or as sweeping as required, and assign and revoke rights from users as needed.
Once you have a page hierarchy of different page types, you can define the actions that can be performed on them. Each action has an ACL associated with it and can also be restricted to a given page type or types. node ships with a set of basic default actions (create page, edit page, save edits, undo edits, delete page and so on), but you can extend this list with custom functionality for custom page types on bespoke sites.
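The permission check this implies can be sketched in a few lines. Everything below (the `Action` and `User` classes, the ACL names) is an assumed shape for illustration, not node's real implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    acl: str                              # the single ACL guarding this action
    page_types: frozenset = frozenset()   # empty means: allowed on any type

class User:
    def __init__(self, name, acls):
        self.name = name
        self.acls = set(acls)             # a user belongs to multiple ACLs

def may_perform(user, action, page_type):
    """A user may perform an action if they belong to the action's ACL
    and the action is not restricted away from this page type."""
    if action.acl not in user.acls:
        return False
    return not action.page_types or page_type in action.page_types

# Hypothetical rights: editing is open to editors on any page type,
# deletion is restricted to admins and to Article pages only.
edit = Action("edit page", acl="editors")
delete = Action("delete page", acl="admins",
                page_types=frozenset({"Article"}))

alice = User("alice", acls={"editors"})
assert may_perform(alice, edit, "Article")
assert not may_perform(alice, delete, "Article")
```

Because each action names exactly one ACL, granting or revoking a right is a single membership change on the user, which is what keeps the model easy to reason about at 60,000-user scale.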
node supports the idea of a ContentCompiler: a module that specialises in modelling a given type of information and can be associated with a page of a given type. It then becomes responsible for modelling, displaying and editing that information on the page. Any page type can have an arbitrary number of named ContentCompilers registered on it, letting you build pages out of the elements required to model and represent the data the page represents.
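A minimal sketch of this interface, assuming a render/edit split (the method names `render` and `edit_form`, and the `BodyText` compiler, are invented for illustration):

```python
class ContentCompiler:
    """Base interface: models, displays and edits one slice of a page's data."""
    def render(self, data):
        raise NotImplementedError
    def edit_form(self, data):
        raise NotImplementedError

class BodyText(ContentCompiler):
    """A trivial compiler for a block of human-readable text."""
    def render(self, data):
        return f"<div class='body'>{data}</div>"
    def edit_form(self, data):
        return f"<textarea name='body'>{data}</textarea>"

# A page type holds an arbitrary number of *named* compilers.
page_type_compilers = {
    "Article": {"body": BodyText(), "summary": BodyText()},
}

def render_page(page_type, content):
    compilers = page_type_compilers[page_type]
    return "\n".join(compilers[name].render(value)
                     for name, value in content.items())

html = render_page("Article", {"body": "Hello", "summary": "Hi"})
```

The naming matters: because compilers are registered under names, a page's stored content can be addressed per-slot, and the edit interface can present one form per compiler.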
This could be content in the ordinary sense, i.e. human-readable text or images. However, a ContentCompiler can also represent metadata about the page: a page of type Feed Item will have a ContentCompiler that stores the start times, end times, time model and so on, which tell the listing pages how to give the pages a sensible chronological ordering.
Building a bespoke site will generally involve creating custom ContentCompilers to represent the real-world data a page represents. For example, on a music festival website you might have ContentCompilers for Works, Performances, Artists, Venues and so on, and these would store both the metadata about each item and the relationships between the different datatypes. In this way one can map an arbitrary many-to-many relationship onto a hierarchy of URLs while maintaining the advantage of being able to attach richly formatted supporting content to the dataset.
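A tiny sketch of how that many-to-many graph might sit behind a URL tree, using made-up festival data (all record shapes and the venue-rooted URL scheme are assumptions for illustration):

```python
# A Performance relates a Work, one or more Artists, and a Venue.
artists = {"a1": {"name": "Quartet X"}}
venues  = {"v1": {"name": "Main Hall"}}
works   = {"w1": {"title": "Symphony No. 1"}}
performances = {
    "p1": {"work": "w1", "artists": ["a1"], "venue": "v1"},
}

def performance_url(pid):
    """Project the relationship onto one URL hierarchy, here rooted at
    the venue; the same record could equally be listed under its work
    or its artists."""
    p = performances[pid]
    return f"/venues/{p['venue']}/performances/{pid}"

assert performance_url("p1") == "/venues/v1/performances/p1"
```

The key point is that the canonical data lives in the ContentCompilers' relationships, while the URL hierarchy is just one navigable projection of it, with editorial content attachable at every node.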
All ContentCompilers are responsible for declaring which of their information is indexable. A ContentCompiler may choose an arbitrary division of this content into fields, and can supply it as text or as numerical, boolean or chronological data. This is then stored in a shared, abstracted search index which can be queried as a whole to give full-text search, or targeted by field values and page types to build advanced searches that cross multiple sub-systems (and therefore backing data tables) without having to worry about the integration.
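The shape of that shared index can be sketched as a flat list of typed (page, field, value) entries. The `index_page` and `search` helpers below are hypothetical, chosen only to show how one store serves both full-text and targeted queries:

```python
import datetime

# The shared, abstracted index: one entry per (page, field) pair.
index = []

def index_page(page_id, page_type, fields):
    """Each compiler declares its indexable fields with typed values."""
    for name, value in fields.items():
        index.append({"page": page_id, "type": page_type,
                      "field": name, "value": value})

index_page("p1", "FeedItem", {"title": "Opening night",
                              "start": datetime.date(2024, 6, 1)})
index_page("p2", "Article", {"title": "Welcome"})

def search(field=None, page_type=None, predicate=lambda v: True):
    """None means 'any', so the same call does broad or targeted search."""
    return [e["page"] for e in index
            if (field is None or e["field"] == field)
            and (page_type is None or e["type"] == page_type)
            and predicate(e["value"])]

# Targeted: chronological field filtered by value, restricted to one type.
hits = search(field="start", page_type="FeedItem",
              predicate=lambda d: d >= datetime.date(2024, 1, 1))
assert hits == ["p1"]
```

Because every sub-system feeds the same abstraction, a cross-system query is just another call against `index`; no join across the backing tables is needed.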
Indexing is restrictable on a per-page-type or per-page basis, depending on the site configuration, so you can define "dark areas" that are excluded from search results. The index is automatically rebuilt as you commit changes to the Node table, and this rebuilding takes place in a lower-priority background thread to maximise the responsiveness of the updating thread.
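The commit-then-reindex flow can be sketched with a work queue and a background worker. This is an illustrative pattern, not node's code; note that CPython offers no portable thread priorities, so yielding between jobs is used here as a stand-in for "lower priority":

```python
import queue
import threading
import time

reindex_queue = queue.Queue()
processed = []   # stands in for the rebuilt index entries

def reindex_worker():
    while True:
        page_id = reindex_queue.get()
        processed.append(page_id)   # rebuild the index entries for page_id
        reindex_queue.task_done()
        time.sleep(0)               # yield so the updating thread stays responsive

threading.Thread(target=reindex_worker, daemon=True).start()

def commit_page(page_id):
    """Commit the change to the Node table, then schedule a background
    reindex; the caller returns without waiting for the index rebuild."""
    # ... write page_id's row to the Node table here ...
    reindex_queue.put(page_id)

commit_page("p1")
reindex_queue.join()   # only for this demo; in production the worker runs forever
```

The design point is that `commit_page` never blocks on indexing: the writer's latency is bounded by the table write, and the index converges shortly afterwards.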
The indexing system is modular and can be extended with new data type handlers if your site represents data of a type that hasn't needed to be indexed before, while still preserving the relationships with any existing indexed data.