Bedrock® is an application framework for building Web technology-based products. The framework removes the need to build and maintain common application subsystems that deal with things like logging, application auto-scaling, database access, online attack mitigation, system modularity, login security, role-based access control, internationalization, and production-mode optimization.
The core subsystems form the base of the Bedrock framework; all other subsystems are built on top of them.
A flexible configuration system underpins the web server framework. Core configuration rules ship with good defaults but can be overridden, and the system is extensible so that new config values can be added easily to any subsystem. All config parameters can be overwritten by projects that use the framework. The configuration is also split across multiple files so that each subsystem can keep its own logical configuration without forcing developers to scan through one large configuration file.
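As a sketch, layered configuration can be implemented as a deep merge of overrides onto defaults. The `config` object and `mergeConfig` helper below are illustrative, not Bedrock's actual API:

```javascript
// Minimal sketch of a layered, multi-file configuration system.
// Core defaults, defined once by the framework:
const config = {
  server: {port: 8080, workers: 1},
  logging: {level: 'info'}
};

// Deep-merge overrides into the defaults; subsystems and projects each
// contribute their own fragment from their own files.
function mergeConfig(target, overrides) {
  for (const [key, value] of Object.entries(overrides)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      target[key] = mergeConfig(target[key] || {}, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// A project overrides only what it needs; the other defaults remain.
mergeConfig(config, {server: {port: 443}});
```

After the merge, `config.server.port` is `443` while `config.server.workers` and the logging defaults are untouched.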
There is a modular design for extending core functionality and web services. Extending the core framework is as easy as overloading a method; adding functionality on top of the framework is as easy as adding a module to a configuration file and restarting the system.
The web server framework automatically scales up to the number of CPUs available on the system, using Node.js's cluster module. Worker processes are automatically restarted if they crash, ensuring the system can recover from fatal errors.
There is a modularized event API and subsystem that allows the system to publish and subscribe to events across multiple CPU cores/processes.
A logging subsystem is available that is capable of category-based logging (info, log, debug, mail, etc.). The log files follow typical log rotation behavior, archiving older logs. There is also the ability to share a single log file across clustered processes, as well as support for multi-file logging.
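Category-based logging can be sketched as follows (the category list and `makeLogger` factory are illustrative; a real logger writes to rotated files rather than an array):

```javascript
// Category-based logger: each category gets its own logging function.
const categories = ['debug', 'info', 'log', 'mail', 'error'];

function makeLogger(write) {
  const logger = {};
  for (const cat of categories) {
    logger[cat] = msg => write(`${new Date().toISOString()} [${cat}] ${msg}`);
  }
  return logger;
}

// Capture output in an array here for illustration.
const lines = [];
const logger = makeLogger(line => lines.push(line));
logger.info('server started');
```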
The persistent storage subsystems are used to store system state between system restarts.
The system has a simple database abstraction layer for reading and writing to MongoDB.
Included is the ability to connect to Redis for simple reading and writing of values to a fast in-memory database.
The ability to create guaranteed unique global IDs (GIDs) is useful when deploying the web server framework on multiple machines in a cluster setting. This allows systems to generate unique IDs without having to coordinate through a central communication mechanism.
The communication subsystems are used to send communication external to the system.
The email subsystem is capable of sending emails via typical SMTP. The emails are template driven in order to support easy customization of email content. The sending of emails is event driven to ensure proper non-blocking operation in node.js.
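Template-driven content can be as simple as variable substitution; the `{{name}}` placeholder syntax and `renderTemplate` helper below are illustrative, not necessarily the template language Bedrock uses:

```javascript
// Fill {{placeholders}} in an email template from a variables object.
function renderTemplate(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => vars[name] ?? '');
}

const body = renderTemplate(
  'Hello {{name}}, your account is ready.', {name: 'Ada'});
```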
The security subsystems protect the application against attack or developer negligence.
A simple, pluggable user authentication system is available. A strong password-based authentication mechanism is built in: all passwords are salted and hashed using bcrypt password hashing and verification. Logins are session-backed, with session state stored in a persistent database (like MongoDB), or in memory if a database subsystem is not available.
An extensible permission and roles system for managing access to resources is available, providing a clear delineation between administrator roles, management roles, and regular roles. Each role can access only the resources associated with it.
Incoming data can be validated before being passed off to subsystems. This protects against garbage/fuzzing attacks on REST API endpoints. JSON Schema is particularly useful for preventing bad data injection and for basic parameter checking.
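To show the flavor of schema-based validation without pulling in a full JSON Schema library, here is a hand-rolled sketch that checks only types and required fields (the `accountSchema` shape is illustrative):

```javascript
// A tiny subset of JSON-Schema-style checking: type and required fields.
const accountSchema = {
  type: 'object',
  required: ['email'],
  properties: {email: {type: 'string'}, age: {type: 'number'}}
};

function validate(schema, data) {
  if (data === null || typeof data !== schema.type) return false;
  for (const field of schema.required || []) {
    if (!(field in data)) return false; // required field missing
  }
  for (const [key, sub] of Object.entries(schema.properties || {})) {
    if (key in data && typeof data[key] !== sub.type) return false;
  }
  return true;
}
```

A REST endpoint would reject any request body that fails validation before the data ever reaches a subsystem.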
A public key service is provided that allows the storage and publishing of public key data. This service enables a distributed public key infrastructure for the system, enabling remote websites and programs to receive messages created by the system and then verify the validity of the messages by checking the digital signature on the message. Verifying agents must access the public key service to fetch the key information needed for the verification step.
Strong protection of REST API resources is possible using asymmetric keys (digital signatures), in addition to something like an API key used over HTTPS. Support code is included for creating and storing X.509 public/private key pairs.
There is a simple rate limiter for protecting against Denial of Service and Distributed Denial of Service attacks.
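A simple rate limiter is often a token bucket per client; the sketch below is illustrative (the `createRateLimiter` name and parameters are not Bedrock's API) and keeps all state in memory:

```javascript
// Token-bucket rate limiter: each client gets `capacity` tokens that
// refill over time; a request is allowed only if a token is available.
function createRateLimiter({capacity, refillPerSecond}) {
  const buckets = new Map();
  return function allow(clientId, now = Date.now()) {
    let b = buckets.get(clientId);
    if (!b) {
      b = {tokens: capacity, last: now};
      buckets.set(clientId, b);
    }
    // Refill tokens based on elapsed time, capped at capacity.
    b.tokens = Math.min(
      capacity, b.tokens + ((now - b.last) / 1000) * refillPerSecond);
    b.last = now;
    if (b.tokens >= 1) {
      b.tokens -= 1;
      return true;
    }
    return false;
  };
}
```

A middleware layer would call `allow()` with the client's IP or API key and return HTTP 429 when it reports `false`.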
The system can digitally sign and encrypt JSON data and publish it in a way that can be easily verified via the Web or an intranet.
Linked Data is a way to create a network of machine interpretable data across different documents and Web sites. It allows an application to start at one piece of Linked Data, and follow embedded links to other pieces of Linked Data that are hosted on different sites across the Web. The Linked Data formats used by the system include JSON-LD and RDFa.
A Linked Data identity system is provided to assign URL identifiers to the people and organizations that use the system. The public portion of the identities, such as names, and publicly available cryptographic public key data, is published in a machine-readable way. Access to the identity information is based on a role-based access control system that also allows private data to be read by authorized agents.
A single identity in the system may be associated with multiple other identities for separate purposes, such as a personal identity and a business identity.
Parsers are included that read JSON-LD and RDFa, convert them to native data formats, and allow the information to be modified, translated, and processed. Converters are included to translate from JSON-LD to RDFa and vice versa.
Subsystems are provided that implement the Secure Messaging specification enabling JSON-LD to be normalized, hashed, and digitally signed. The subsystem also enables the verification of any RDF data that has been digitally signed.
The customer experience subsystems are designed to ensure that the customers that use the system have a pleasant experience. This involves ensuring that the interface is elegant, responsive, and works across a variety of mobile, tablet, and desktop devices.
A front-end HTML templating system is provided that supports dynamic views and compiled view caching. The static or dynamic pages of a site can be overridden, allowing the core web server framework to provide basic pages while the product pages override certain aspects of them. This is useful for DRY-based design in template code, allowing pages to be layered on a case-by-case basis. Rich support for scalable icons is also included.
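The layered overriding described above can be sketched as an ordered view lookup, where the most specific layer wins (the `makeViewResolver` helper and view names are hypothetical):

```javascript
// Resolve a view name by checking layers in order, most specific first.
function makeViewResolver(layers) {
  return function resolve(name) {
    for (const layer of layers) {
      if (name in layer) return layer[name];
    }
    throw new Error(`unknown view: ${name}`);
  };
}

const resolve = makeViewResolver([
  // Product layer: overrides only the home page.
  {'home.html': '<h1>Product Home</h1>'},
  // Framework layer: provides the defaults for everything else.
  {'home.html': '<h1>Default</h1>', 'login.html': '<form>login</form>'}
]);
```

Here the product's `home.html` shadows the framework default, while `login.html` falls through to the framework layer, so each product overrides only what it needs.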
Preliminary internationalization support is included so that particular parts of an interface can be translated into other languages.
Many Web applications (HTML + CSS + JavaScript + fonts) can grow to a megabyte or more in size per page hit. Bedrock contains a minification subsystem that provides rich debugging support in development mode and optimized HTML+CSS+JavaScript in production mode.
Basic UI widgets are available: stackable modals, popovers, navbar hovercard, duplicate ID checker, generic modal alert, common alert display, tabs, bootstrap-styled form inputs, and help toggle.
A lazy-compilation widget is also available; it can drastically improve initial page and widget-readiness for complex UIs by delaying compilation until it's needed.
The developer tooling allows software engineers to easily build new applications on top of the framework, debug the system when problems arise, generate good testing code coverage for the system, and ensure that bug regressions are caught before deploying the software to production.
A modular testing subsystem is capable of running both backend unit tests and browser-based frontend tests. The tests are designed to run inside continuous integration frameworks, providing constant feedback on code coverage and test status as changes are made to software built using Bedrock.
An exception reporting system is useful when errors happen in the depths of a module and need to bubble up to the REST API. The error system supports chained exception reporting to aid in tracing and debugging the system in development mode. In production mode, detailed errors are not shown, to avoid surfacing sensitive data about the system.
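Chained exception reporting can be sketched by attaching a cause to each wrapping error, so the full chain is recoverable later (the function and messages below are illustrative):

```javascript
// A low-level failure is wrapped with higher-level context; each wrapper
// keeps a reference to its cause so the full chain can be reported.
function loadAccount() {
  try {
    throw new Error('connection refused'); // e.g. from the database driver
  } catch (dbError) {
    const err = new Error('Failed to load account.');
    err.cause = dbError;
    throw err;
  }
}

// Walk the chain when reporting; development mode would show all of it,
// production mode only a sanitized top-level message.
const chain = [];
try {
  loadAccount();
} catch (e) {
  for (let err = e; err; err = err.cause) {
    chain.push(err.message);
  }
}
```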
The web application framework is contained in a typical npm package that can be installed as a dependency. The framework is able to be extended by the project using it via an extensible configuration system and a layered front-end design.