Rhino Carrier Grade System | Rhino SLEE Features | Carrier Grade Telecoms Applications

By: OpenCloud  05-Apr-2012

The foundation of Rhino is a fault-tolerant core that provides continuous availability, service logic execution and on-line management even during network outages, hardware platform failure, software failure and maintenance operations.

The Rhino fault-tolerant core delivers:

  • An N-way active-active, symmetric software fault-tolerance architecture. This may be activated and used on a service-by-service basis, without any resilience-specific coding being required in the service. For services configured to use this capability, a "call" or "session" does not fail when a cluster member fails.
  • Optimal, scalable and efficient use of all available hardware, as Rhino uses symmetric software fault tolerance (not a redundant, hot-standby approach).
  • Single system image management regardless of cluster architecture. This means that each service can be managed as a single, logical entity, regardless of the actual physical distribution used for redundancy and scaling.
  • On-line application upgrade and on-line application server upgrade.
  • Self-monitoring and self-healing for a reliable and predictable system.

What does "Carrier Grade" mean?

The term Carrier Grade does not have a universally accepted definition. The requirements for a Carrier Grade system are generally accepted to have come from the Bellcore LATA Switching Systems General Requirements 1x series of documents. OpenCloud is explicit about how the Rhino product complies with the term "Carrier Grade."

The following table details how Rhino meets the Carrier Grade requirements.

Each Carrier Grade requirement below is followed by the corresponding Rhino SLEE features and performance characteristics.
Performance
  • Rhino monitors its own performance and can detect overload situations. Rhino will reject incoming work so it can process in-progress work within acceptable operating parameters and avoid a potential failure due to overload.
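The overload behaviour described above can be sketched as a simple admission-control gate: new work is rejected once the amount of in-progress work reaches an acceptable operating limit. This is an illustrative sketch only; the class and threshold names are hypothetical and not part of the Rhino API.

```java
// Hypothetical sketch of overload-based admission control: accept new work
// only while in-progress work stays within an acceptable operating limit,
// and reject the rest so existing work can complete.
public class AdmissionControl {
    private final int maxInProgress;   // acceptable operating limit
    private int inProgress = 0;        // work currently being processed

    public AdmissionControl(int maxInProgress) {
        this.maxInProgress = maxInProgress;
    }

    /** Returns true if the new work item was accepted, false if rejected. */
    public synchronized boolean tryAccept() {
        if (inProgress >= maxInProgress) {
            return false;              // reject: protect in-progress work
        }
        inProgress++;
        return true;
    }

    /** Called when a unit of work completes, freeing capacity. */
    public synchronized void complete() {
        if (inProgress > 0) inProgress--;
    }
}
```

A real system would typically measure latency and throughput rather than a raw count, but the shape is the same: shed incoming load before overload causes failure.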
Reliability and Availability
  • Rhino is a clustered set of nodes.
  • A Rhino cluster is capable of surviving multiple simultaneous server-node failures and network partitioning. Services are continuously available, even in the case of a cluster node failure.
  • Rhino automatically detects node failures, re-configures itself and fails-over in-progress work to the remaining members of the cluster.
Scalability
  • Rhino monitors its own behaviour and uses this information as part of its load-sharing and work-allocation strategies, under which work is dynamically allocated to nodes in the cluster.
  • New nodes may be dynamically added to a running Rhino SLEE cluster. Rhino will automatically detect the new node, re-configure itself and transparently bring the new node up-to-date with the rest of the cluster.
  • Each Rhino node scales linearly with respect to the number of CPUs and the speed of the CPUs on the machine it is running on.
  • Support for online upgrade of the Rhino SLEE, dynamic re-configuration of the cluster and the self-monitoring-based load-sharing strategies makes it simple to increase the performance of a running cluster by upgrading the hardware it runs on, whilst the cluster is still active and processing services.
Failure Containment
  • Rhino detects node failures and automatically re-configures itself. The Rhino SLEE then performs fail-over and reallocation of work as needed.
Built in Redundancy
  • Each Rhino server node is a peer in that it has the same responsibilities as every other node (no single point of failure and efficient use of processing power).
  • The state of services is replicated to all nodes in the Rhino cluster, as is the state of Rhino itself.
  • As all members in the cluster are peers, nodes can be dynamically added and removed from the cluster.
Management and Maintainability
  • The Rhino SLEE is managed as a single system image. Management operations are performed as part of a transaction, so that either all nodes perform the appropriate action, or the management operation rolls back.
  • Management of the system is always available as each node provides a management service that receives management commands from the system administrator.
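The all-or-nothing management behaviour described above follows the classic two-phase pattern: every node first agrees it can apply the change, and only then is the change committed everywhere; one refusal rolls the whole operation back. The interface and method names below are illustrative, not Rhino's actual management API.

```java
import java.util.List;

// Hypothetical sketch of transactional management across cluster nodes:
// phase 1 asks every node to prepare; phase 2 commits only if all agreed,
// otherwise every node rolls back.
interface NodeAgent {
    boolean prepare(String operation);   // can this node apply the operation?
    void commit(String operation);       // apply the prepared operation
    void rollback(String operation);     // undo the prepared state
}

public class ClusterManagement {
    /** Returns true only if every node committed the operation. */
    public static boolean apply(List<NodeAgent> nodes, String operation) {
        // Phase 1: ask every node to prepare the operation.
        for (NodeAgent node : nodes) {
            if (!node.prepare(operation)) {
                for (NodeAgent n : nodes) n.rollback(operation);
                return false;            // one refusal aborts the whole cluster
            }
        }
        // Phase 2: all nodes prepared, so commit everywhere.
        for (NodeAgent node : nodes) node.commit(operation);
        return true;
    }
}
```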


Other products and services from OpenCloud


OpenCloud Rhino Application Products | Rhino Telecom Applications | Rhino 2.2

JSLEE specifies an asynchronous run-time environment which allows telecommunication systems to be modelled as Finite State Machines connecting to a number of external systems by asynchronous signalling protocols. To support independent development of applications and services on these platforms, OpenCloud provides an array of information and tools for developers.
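The Finite State Machine model described above can be illustrated with a minimal event-driven session: each asynchronous signalling event drives one state transition. The states and events below are simplified examples, not a real protocol; a JAIN SLEE application would express this pattern through SBBs and typed events rather than a bare switch.

```java
// Illustrative sketch of the FSM model: a call session that moves between
// states in response to asynchronous signalling events.
public class CallStateMachine {
    public enum State { IDLE, RINGING, CONNECTED, ENDED }
    public enum Event { INVITE, ANSWER, HANGUP }

    private State state = State.IDLE;

    public State getState() { return state; }

    /** Each asynchronous event drives at most one state transition. */
    public void onEvent(Event event) {
        switch (state) {
            case IDLE:
                if (event == Event.INVITE) state = State.RINGING;
                break;
            case RINGING:
                if (event == Event.ANSWER) state = State.CONNECTED;
                else if (event == Event.HANGUP) state = State.ENDED;
                break;
            case CONNECTED:
                if (event == Event.HANGUP) state = State.ENDED;
                break;
            default:
                break; // ENDED: no further transitions
        }
    }
}
```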



Resource Adaptor Architecture | Rhino Application Server Resource Adaptor

The Rhino Telecom Application Server provides integration capabilities via an extensible plug-in architecture known as the Resource Adaptor Architecture. Further RAs can be added by any competent Java developer, so legacy equipment can be readily integrated and accessed, as can the proprietary extensions to standard protocols often used by equipment providers.



Rhino Real-time Telecommunications Solutions | Telecom Application Server | JAIN SLEE 1.1 Compliant

Savanna Carrier-grade framework – this provides the “five nines” (99.999% availability) capability of the platform, including service and platform fail-over, on-line upgrade, and scalability and availability through clustering, distributed memory management, lock management, etc.



SIP Protocol | IMS and SIP Network for Telecommunications | SIP Gateway

This means that new and variant protocols may be added to a Rhino Telecom Application Server whilst services are running, with no requirement for down-time and no service incompatibilities caused by services requiring specific protocol versions.