Single instance and distributed deployments
A Unomaly instance can have one of three roles: Standalone, Manager, or Worker. The role is selected on each individual instance in the console menu. For single-instance deployments, the instance must be configured as Standalone. A distributed deployment consists of one Manager instance and multiple Worker instances.
A Standalone instance runs all services and provides the entire Unomaly feature set. Every newly installed instance is a standalone instance by default. Standalone instances do everything: they receive data, analyze it, store both the results and the raw data, and provide the UI and other user-facing functionality. In that sense, a standalone instance requires nothing else (other than data).
A Manager instance is the brain of a distributed Unomaly deployment. A cluster can have only one manager. The manager holds all metadata, learning, intelligence, and primary data, and is independent of its workers. It also provides the user-facing features and holds the license information.
A Worker instance is a data-crunching instance. A manager can have up to 25 workers connected to it. A worker receives data from monitored systems, analyzes it, and syncs its learning and analysis results to the manager. Workers operate entirely in memory and are effectively stateless. The only data stored on a worker is raw log data, which end users can search through the manager.
From a software perspective, every Unomaly instance is identical: each one can run the same services and perform the same functions. Each instance has its own console menu and supports shell access for administrative tasks and troubleshooting.
How distributed deployments work
Workers and the manager communicate over encrypted channels. Each worker is responsible for setting up the connection to its associated manager. Once connected, the worker registers itself as available to the manager and gains access to the intelligence the manager provides.
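The registration step can be pictured as a small capacity-checked handshake. This is only a sketch; the class and method names below are hypothetical (Unomaly's actual channel is encrypted and its internals are not public), but it illustrates a manager accepting up to 25 workers and granting them access to its intelligence:

```python
# Hypothetical sketch of the worker-to-manager registration step.
# Names are illustrative, not Unomaly's actual API.

class ManagerHub:
    def __init__(self, max_workers=25):
        # A Unomaly manager supports at most 25 connected workers
        self.max_workers = max_workers
        self.workers = []

    def register(self, worker_id):
        """Register a worker as available and grant it access to intelligence."""
        if len(self.workers) >= self.max_workers:
            raise RuntimeError("manager already has the maximum number of workers")
        self.workers.append(worker_id)
        # In this sketch, "access" is just a token pointing at the
        # manager's intelligence endpoint.
        return {"intelligence_endpoint": "manager/intelligence"}


hub = ManagerHub()
grant = hub.register("worker-01")
```

The key point the sketch captures is directionality: the worker initiates registration, and the manager only tracks availability and hands out access.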
How an event is processed
- Worker(s) receive data from the monitored system(s).
- The worker dynamically loads the information it needs from the manager to analyze and classify each specific event.
- The worker caches (in memory) the intelligence it has gathered from the manager.
- Results are sent to the manager, which in turn updates baselines, statistics, graphs, and so on.
- The raw event, whether normal or anomalous, is saved locally on the worker and is searchable from the manager.
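The steps above can be sketched as a worker loop with an in-memory intelligence cache. All class and method names here are illustrative assumptions, not Unomaly's actual API; the sketch only mirrors the flow described in the list:

```python
# Illustrative sketch of the worker's event-processing flow.
# All names are hypothetical; they mirror the documented steps only.

class FakeManager:
    """Stand-in manager holding per-system known-event profiles."""
    def __init__(self):
        self.profiles = {"web-01": {"known_events": {"login ok"}}}
        self.results = []  # baselines/statistics would be updated from these

    def fetch_intelligence(self, system):
        return self.profiles.get(system, {"known_events": set()})

    def report(self, system, event, is_anomaly):
        # The manager updates baselines, statistics, graphs, etc.
        self.results.append((system, event, is_anomaly))


class Worker:
    def __init__(self, manager):
        self.manager = manager
        self.cache = {}          # in-memory intelligence cache, keyed by system
        self.local_store = []    # raw log data stored locally on the worker

    def process(self, system, event):
        # Dynamically load (and cache) the intelligence for this system
        if system not in self.cache:
            self.cache[system] = self.manager.fetch_intelligence(system)
        profile = self.cache[system]

        # Classify the event against the learned profile
        is_anomaly = event not in profile["known_events"]

        # Send the analysis result to the manager
        self.manager.report(system, event, is_anomaly)

        # Store the raw event locally, whether normal or anomalous
        self.local_store.append((system, event))
        return is_anomaly
```

For example, `Worker(FakeManager()).process("web-01", "login ok")` classifies a known event as normal, while an unseen event would be flagged as anomalous; in both cases the raw event stays on the worker and only the result goes to the manager.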
How data is retrieved by the end user
All end-user interaction goes through the manager over HTTPS. Workers are transparent to the user; they communicate with the manager over SSH.
- The user authenticates against a database local to the manager.
- The user retrieves system profiles, graphs, statistics, metrics, situations, reports, and settings from databases local to the manager.
- The user retrieves raw events (the event tab) transparently from the worker(s).
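A minimal sketch of this split, with hypothetical names: metadata queries are answered from the manager's own databases, while raw-event searches are transparently fanned out to the workers that store the log data:

```python
# Hypothetical sketch of how the manager serves end-user queries.
# Metadata lives on the manager; raw events live on the workers.

class WorkerStore:
    def __init__(self, events):
        self.events = events  # raw log data stored on this worker

    def search(self, query):
        return [e for e in self.events if query in e]


class Manager:
    def __init__(self, workers):
        self.workers = workers
        # Profiles, graphs, statistics, etc. in manager-local databases
        self.metadata = {"systems": ["web-01"], "situations": []}

    def get_metadata(self, key):
        # Answered entirely from the manager's local databases
        return self.metadata.get(key)

    def search_events(self, query):
        # Fan the search out to every worker and aggregate the results;
        # the workers stay invisible to the end user
        results = []
        for worker in self.workers:
            results.extend(worker.search(query))
        return results
```

The user only ever talks to `Manager`; which worker holds which raw event is an internal detail, matching the "workers are transparent" behavior described above.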