This topic describes how Unomaly processes events, from the moment the streaming data arrives to the way the events are presented in the user interface.

Analyzing events in Unomaly

The goal of Unomaly is to determine the relevance of each event it processes by comparing the event to the learnings database it builds for each system.

[Event scoring diagram: a summary of how Unomaly categorizes and scores incoming events.]

Comparing the structure of events

During training, Unomaly analyzes the structure of each incoming event to learn the normal behavior of the system. Unomaly does not need predefined templates or log formats to understand the structure of events. Instead, it builds a database of event structures based on the incoming events it analyzes. This database is referred to as the learnings.
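To make the idea concrete, the sketch below reduces raw log lines to structure signatures by masking variable tokens such as timestamps, IDs, and addresses. The masking rules and the structure_of function are illustrative assumptions; Unomaly's actual structure-extraction algorithm is internal to the product.

    import re

    # Hypothetical masking rules; Unomaly's real structure extraction
    # is internal to the product and more sophisticated than this.
    VARIABLE_PATTERNS = [
        (re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}\S*"), "<timestamp>"),
        (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<ip>"),
        (re.compile(r"\b\d+\b"), "<num>"),
    ]

    def structure_of(event: str) -> str:
        """Reduce a raw log line to its constant structure."""
        for pattern, placeholder in VARIABLE_PATTERNS:
            event = pattern.sub(placeholder, event)
        return event

    # Two events that differ only in their variable parts share one structure:
    a = structure_of("2023-05-01T12:00:01Z user 1042 logged in from 10.0.0.7")
    b = structure_of("2023-05-01T12:03:44Z user 977 logged in from 10.0.0.9")
    assert a == b == "<timestamp> user <num> logged in from <ip>"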

The learnings database contains a summary of all past observations of the data that Unomaly receives. Each structure represents a normal event for at least one system. Unomaly uses these events to describe the system’s baseline behavior, which is how the system behaves under normal conditions. This summary is saved as the system profile.

After training completes, Unomaly determines the relevance of each new event by comparing its structure to the learnings database. If the event exists in the learnings database, the event is labeled based on the information in the learnings: it may be a normal event or a known event. Because normal events are repetitive, they are not considered very relevant for analysis. An event is anomalous if it does not match any event in the learnings database.
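The lookup step can be pictured roughly as follows, assuming the learnings database behaves like a dictionary keyed by structure. The classify function and the entry fields here are hypothetical, not Unomaly's API.

    def classify(event: str, learnings: dict) -> str:
        """Label one event against the learnings database."""
        structure = structure_of(event)   # from the earlier sketch
        entry = learnings.get(structure)
        if entry is None:
            return "anomaly"              # no match: relevant, gets learned
        if entry.get("known"):
            return "known"                # saved by a user as worth tracking
        return "normal"                   # repetitive: low analytic relevance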

Clustering and scoring anomalous events

After analyzing the structure of a new event and comparing it to the learnings database, Unomaly assigns the event a score that describes how anomalous it is. If the event is anomalous, Unomaly adds it to a situation. Each situation is a collection of anomalous events from a related time period for a single system.
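One plausible, purely illustrative way to score and cluster: rarer structures score higher, and anomalous events that occur close together on the same system join the same situation. The rarity formula and the five-minute window below are assumptions, not Unomaly's scoring model.

    import math
    from datetime import timedelta

    def anomaly_score(occurrences: int, total_events: int) -> float:
        """Toy rarity score in [0, 1]: unseen or rare structures score high."""
        if occurrences == 0:
            return 1.0
        return max(0.0, 1.0 - math.log1p(occurrences) / math.log1p(total_events))

    WINDOW = timedelta(minutes=5)  # assumed grouping window

    def add_to_situation(situations: list, system: str, event: dict) -> dict:
        """Append the event to an open situation for this system,
        or start a new situation if none is recent enough."""
        for s in reversed(situations):
            if s["system"] == system and event["time"] - s["end"] <= WINDOW:
                s["events"].append(event)
                s["end"] = event["time"]
                return s
        s = {"system": system, "events": [event],
             "start": event["time"], "end": event["time"]}
        situations.append(s)
        return s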

Monitoring situations and knowns

When Unomaly detects an anomaly, it also learns the event. This means that the next time the event occurs, Unomaly does not highlight it as an anomaly. If the anomalous event is one that you want to track each time it occurs, you can save it as a known. When you create a known, you add descriptions and tags to the event that explain what the event means and how to resolve it. Read more in “Add knowns to prioritize event scoring”.
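In data terms, a known might look like a learned structure annotated with a description and tags, along the lines of this sketch. The field names and the save_as_known helper are assumptions, not Unomaly's actual schema.

    learnings = {}  # structure -> learned entry, as in the earlier sketches

    def save_as_known(learnings: dict, structure: str,
                      description: str, tags: list) -> None:
        """Mark a learned structure as a known so every recurrence is tracked."""
        entry = learnings.setdefault(structure, {})
        entry.update({"known": True, "description": description, "tags": tags})

    save_as_known(
        learnings,
        "<timestamp> disk <num> reported SMART failure",
        "Pre-failure disk warning; open a hardware ticket.",
        ["hardware", "disk", "critical"],
    )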

Unomaly evaluates situations against a defined set of conditions that determine whether to execute an action for the event. You may want Unomaly to take action when one of your systems goes offline, when a specific critical known is detected, or when the production environment produces a significant increase in anomalies. The action can be to send an email to a specific user, to post to a team chat room, or to flag the event for later review. Read more in “Configure actions and notifications”.
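The condition-to-action evaluation could be sketched like this. The trigger threshold, the field names, and the notifier interface are all hypothetical stand-ins for what you configure in the product.

    def should_alert(situation: dict) -> bool:
        """Assumed trigger conditions: a critical known appears, or the
        situation shows a surge of anomalous events."""
        has_critical_known = any("critical" in e.get("known_tags", [])
                                 for e in situation["events"])
        return has_critical_known or len(situation["events"]) > 50

    def run_actions(situation: dict, notifier) -> None:
        """`notifier` stands in for an email or chat integration."""
        if should_alert(situation):
            notifier.send(f"Situation on {situation['system']}: "
                          f"{len(situation['events'])} anomalous events")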

Another way to monitor situations on a system is to filter for the conditions you want to track and save the filtered results as a view. You can use the view to monitor your data for future occurrences of the same or similar situations. Read more in “Create views to save workflows”.
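Conceptually, a view is a saved, named filter over situations. A minimal sketch, reusing the situation shape from the clustering example above; the view structure and system naming are illustrative only.

    # `situations` is the list produced by the clustering sketch above.
    situations: list = []

    view = {
        "name": "Critical database anomalies",
        "matches": lambda s: (s["system"].startswith("db-")
                              and any("critical" in e.get("known_tags", [])
                                      for e in s["events"])),
    }

    # Re-running the saved filter surfaces future occurrences of the
    # same or similar situations:
    matching = [s for s in situations if view["matches"](s)]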

Continuously updating existing learnings

Every event that Unomaly analyzes updates the learnings database. An anomalous event is added as a first occurrence. A repetitive event updates that structure’s statistics, such as its frequency and total number of occurrences. The algorithms also detect common patterns and can begin merging learnings.
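A simplified sketch of that bookkeeping, with assumed field names; the merging of similar structures is omitted here.

    from datetime import datetime

    def update_learnings(learnings: dict, structure: str, now: datetime) -> None:
        """First sighting creates an entry; repeats update its statistics.
        (Merging of similar structures is not shown in this sketch.)"""
        entry = learnings.get(structure)
        if entry is None:
            learnings[structure] = {"first_seen": now, "last_seen": now,
                                    "occurrences": 1}
        else:
            entry["occurrences"] += 1
            entry["last_seen"] = now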

This means that the learnings database grows with anomalies, and the merging allows Unomaly to keep a dense summary of all the data it has seen, from the first event to the last. As a result, Unomaly becomes more efficient over time: more data means a better summary. The learning process is transparent and happens continuously, and users can inspect the learning summaries by viewing a system’s profile.

Storing analyzed events

Finally, Unomaly retains the analyzed event in a database. The raw event is saved in the FIFO event database. If the event is an anomaly or a known, its storage is prioritized over other events. Time series are updated so that the graphs include the event, and the updated situation, learnings, and so on are persisted and made available in the web interface and the API.
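As a rough illustration of prioritized FIFO retention, the toy store below evicts the oldest event labeled normal first, so anomalies and knowns are retained longer. The class and its eviction policy are assumptions, not Unomaly's storage engine.

    from collections import deque

    class EventStore:
        """Toy FIFO store: when full, drop the oldest 'normal' event
        first so anomalies and knowns are retained longer."""

        def __init__(self, capacity: int):
            self.capacity = capacity
            self.events = deque()

        def add(self, event: dict) -> None:
            if len(self.events) >= self.capacity:
                for i, old in enumerate(self.events):
                    if old["label"] == "normal":
                        del self.events[i]   # evict a low-priority event
                        break
                else:
                    self.events.popleft()    # all prioritized: plain FIFO
            self.events.append(event)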