Rhino Carrier Grade System | Rhino SLEE Features | Carrier Grade Telecoms Applications
The foundation of Rhino is a fault-tolerant core
that provides continuous availability, service logic execution and on-line management even during network outages, hardware
platform failure, software failure and maintenance operations.
The Rhino fault-tolerant core
- An N-way, active-active, symmetric software fault-tolerance architecture. This may be activated and used on a service-by-service basis, without any resilience-specific coding being required in the service. For services configured to use this capability, a "call" or "session" does not fail when a cluster member fails.
- Optimal, scalable and efficient use of all available hardware, as Rhino uses symmetric software fault tolerance rather than a redundant, hot-standby scheme in which spare hardware sits idle.
- Single system image management, regardless of cluster architecture. Each service can be managed as a single, logical entity, regardless of the actual physical distribution used for redundancy and scaling.
- On-line application upgrade and on-line application server upgrade.
- Self-monitoring and self-healing for a reliable and predictable system.
What does "Carrier Grade" mean?
The term Carrier Grade does not have a universally accepted definition. The requirements for a Carrier Grade system are generally accepted to have come from the Bellcore LATA Switching Systems General Requirements 1x series of
documents. OpenCloud is explicit about how the Rhino product complies with the term "Carrier Grade".
The following table details how Rhino meets the Carrier Grade requirements:
|Requirements ||Rhino SLEE Features and Performance |
|Performance || |
- Rhino monitors its own performance and can
detect overload situations. Rhino will reject incoming work so it can process in-progress work within acceptable operating
parameters and avoid a potential failure due to overload.
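The overload protection described above can be pictured as a simple admission-control guard: new work is rejected once in-progress work reaches a high-water mark, and admitted again once capacity frees up. This is an illustrative sketch only, not Rhino's actual API; the class name and threshold are invented for the example.

```python
# Illustrative overload guard: reject new work when in-progress work
# reaches a configured high-water mark (names are hypothetical, not
# Rhino's real interfaces).
class OverloadGuard:
    def __init__(self, high_water_mark):
        self.high_water_mark = high_water_mark
        self.in_progress = 0

    def try_admit(self):
        """Admit new work only while below the high-water mark."""
        if self.in_progress >= self.high_water_mark:
            return False          # reject: protect in-progress work
        self.in_progress += 1
        return True

    def complete(self):
        """A unit of in-progress work has finished."""
        self.in_progress -= 1

guard = OverloadGuard(high_water_mark=2)
results = [guard.try_admit() for _ in range(3)]  # third attempt rejected
guard.complete()                                  # one session finishes
results.append(guard.try_admit())                 # capacity freed, admitted
```

The key point the sketch makes is that rejection is deliberate and early: refusing the third request keeps the two admitted sessions within acceptable operating parameters.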
|Reliability and Availability || |
- Rhino is a clustered set of nodes.
- A Rhino cluster is capable of surviving multiple simultaneous server node failures and network partitioning. Services remain continuously available, even in the case of a cluster node failure.
- Rhino automatically detects node failures, re-configures itself and fails over in-progress work to the remaining members of the cluster.
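One way to picture this fail-over step is as a re-mapping of session ownership: sessions owned by the failed node are redistributed across the survivors, while everyone else's sessions are untouched. This is a hypothetical sketch, not Rhino's internal mechanism; the function, session and node names are invented.

```python
# Hypothetical fail-over sketch: re-allocate the failed node's sessions
# round-robin over the surviving cluster members.
def fail_over(ownership, failed_node, survivors):
    """Return a new session -> node map with the failed node's
    sessions redistributed; other sessions keep their owner."""
    new_owner = {}
    i = 0
    for session, node in sorted(ownership.items()):
        if node == failed_node:
            node = survivors[i % len(survivors)]
            i += 1
        new_owner[session] = node
    return new_owner

sessions = {"s1": "node1", "s2": "node2", "s3": "node1", "s4": "node3"}
after = fail_over(sessions, "node1", ["node2", "node3"])
# No session is left owned by the failed node.
```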
|Scalability || |
- Rhino monitors its own behaviour and uses this information as part of its load-sharing and work-allocation strategies, dynamically allocating work to nodes in the cluster.
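A minimal sketch of measurement-driven work allocation, assuming each node reports a load figure: new work simply goes to the least-loaded node. This is an invented illustration of the idea, not Rhino's actual load-sharing strategy.

```python
# Hypothetical load-sharing sketch: route new work to the node
# reporting the lowest load (node names and metrics are invented).
def allocate(loads):
    """loads: node -> current load measurement. Pick the least loaded."""
    return min(loads, key=loads.get)

loads = {"node1": 0.82, "node2": 0.35, "node3": 0.61}
target = allocate(loads)  # node2 carries the least load
```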
- New nodes may be
dynamically added to a running Rhino SLEE cluster. Rhino will automatically detect the new node, re-configure itself and
transparently bring the new node up-to-date with the rest of the cluster.
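The two steps in that bullet (catch the new node up, then make it eligible for work) can be sketched as follows. The class, node names and state contents are hypothetical; real cluster membership and state transfer are far more involved.

```python
# Hypothetical sketch of dynamic node addition: copy the replicated
# cluster state to the new node, then include it in work allocation.
class Cluster:
    def __init__(self, nodes, state):
        self.nodes = list(nodes)
        # Each member holds its own copy of the cluster state.
        self.replicas = {n: dict(state) for n in nodes}

    def add_node(self, node):
        # Bring the new node up to date from an existing member...
        source = self.nodes[0]
        self.replicas[node] = dict(self.replicas[source])
        # ...then make it eligible to receive new work.
        self.nodes.append(node)

cluster = Cluster(["node1", "node2"], {"service-config": "v1"})
cluster.add_node("node3")  # node3 now holds the state and shares work
```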
- A Rhino node scales linearly with respect to the number and speed of the CPUs on the machine it is running on.
- The support for online upgrade of the Rhino SLEE, dynamic re-configuration of the cluster and self-monitoring-based load-sharing strategies means it is simple to increase the performance of a running cluster by upgrading its hardware, whilst the cluster is still active and processing traffic.
|Failure Containment || |
- Rhino detects node failures and automatically re-configures itself. The Rhino SLEE then performs fail-over and re-allocation of work as required.
|Redundancy || |
- Each Rhino server node is a peer, with the same responsibilities as every other node (no single point of failure, and efficient use of processing power).
- The state of services is replicated to all nodes in the Rhino cluster, as is the state of Rhino itself.
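Replication to all peers is what lets any surviving member continue a session after a failure. The sketch below uses a naive write-all store; the class and key names are invented and real replication protocols must also handle ordering, partial writes and partitions.

```python
# Hypothetical write-all replication sketch: every node keeps a full
# copy of the state, so any node can serve any session.
class ReplicatedStore:
    def __init__(self, nodes):
        self.replicas = {n: {} for n in nodes}

    def put(self, key, value):
        # Write to every replica so no node is a single point of failure.
        for replica in self.replicas.values():
            replica[key] = value

    def get(self, key, node):
        return self.replicas[node].get(key)

store = ReplicatedStore(["node1", "node2", "node3"])
store.put("session-42", {"state": "connected"})
# A node that never handled the session can still serve it.
value = store.get("session-42", "node3")
```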
- As all members of the cluster are peers, nodes can be dynamically added to and removed from the cluster.
|Management and Maintainability || |
- The Rhino SLEE is managed as a single system image. Management operations are performed as part of a transaction, so that either all nodes perform the appropriate action, or the management operation rolls back.
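The all-or-nothing behaviour of a transactional management operation can be sketched as a prepare/commit/rollback pattern: if any node refuses the change, every node that accepted it is rolled back. This is a simplified illustration, not Rhino's management implementation; the `Node` class and configuration keys are invented.

```python
# Hypothetical transactional management sketch: either all nodes commit
# a configuration change, or none of them do.
class Node:
    def __init__(self, healthy=True):
        self.healthy = healthy
        self.config = {}    # committed settings
        self.pending = {}   # prepared but not yet committed

    def prepare(self, key, value):
        if not self.healthy:
            raise RuntimeError("node rejected the change")
        self.pending[key] = value

    def commit(self, key):
        self.config[key] = self.pending.pop(key)

    def rollback(self, key):
        self.pending.pop(key, None)

def apply_transactionally(nodes, key, value):
    """Prepare on every node; commit everywhere only if all prepared,
    otherwise roll back the nodes that accepted the change."""
    prepared = []
    try:
        for node in nodes:
            node.prepare(key, value)
            prepared.append(node)
    except RuntimeError:
        for node in prepared:
            node.rollback(key)
        return False
    for node in nodes:
        node.commit(key)
    return True

good = [Node(), Node()]
ok = apply_transactionally(good, "trace-level", "FINE")       # commits

mixed = [Node(), Node(healthy=False)]
failed = apply_transactionally(mixed, "trace-level", "FINE")  # rolls back
```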
- Management of the system is always available as
each node provides a management service that receives management commands from the system administrator.