Rules cheat sheet

This tutorial shows you how to use Coralogix parsing rules, shares best practices, and explains what to check if your rule is not working

Written by Amir Raz
Updated over 5 years ago

Rules help you take full advantage of Coralogix capabilities by letting you control your log formatting. With Coralogix parsing rules you can, for example, convert plain-text logs into JSON logs, or extract specific data from the log message into its own new JSON key. Based on REGEX, Coralogix parsing rules alter the text prior to the actual indexing in Elasticsearch, so you have full control over the structure of your data.

Rules Types

Extract

You can extract any data within the log using an extract rule. You can extract the data into one of the Coralogix metadata fields (severity, category, className, methodName, threadId) or into a new JSON key that will be added to your log. The rule is applied to any log that matches your REGEX criteria; the original log text/JSON remains intact. A common use case is to extract the severity tag from the log payload, if it exists, and override the log severity key when none was provided in the log metadata. Another common action is to extract data from the log message in order to query the logs more easily. For example, extract an id that was sent as part of the message text into a new JSON field so you can perform range queries with it; once the id is stored in its own field, you can also run numeric aggregations on it.

Example

Before

{
   "application": "prod",
   "subsystem": "platform",
   "status": 200,
   "message": "user login was successful user id=44028351"
}

REGEX

"message"\s*:\s*".*id=(?P<userId>[0-9]+)

After rule was applied

{
   "application": "prod",
   "subsystem": "platform",
   "status": 200,
   "message": "user login was successful user id=44028351",
   "userId": "44028351"
}
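Coralogix applies extract rules server-side, but the behavior is easy to reason about locally. The following sketch mimics it with Python's re module, assuming a hypothetical helper name (apply_extract); each named capture group becomes a new top-level JSON key while the original log is left untouched.

```python
import json
import re

# The extract rule's REGEX from the example above; the named group
# (?P<userId>...) supplies the name of the new JSON key.
EXTRACT_RE = re.compile(r'"message"\s*:\s*".*id=(?P<userId>[0-9]+)')

def apply_extract(raw_log: str) -> dict:
    """Illustrative sketch: add each named capture group as a new key."""
    log = json.loads(raw_log)
    match = EXTRACT_RE.search(raw_log)
    if match:
        log.update(match.groupdict())
    return log

raw = ('{"application": "prod", "subsystem": "platform", "status": 200, '
       '"message": "user login was successful user id=44028351"}')
print(apply_extract(raw)["userId"])  # 44028351
```

Note that the extracted value is a string; to run numeric range queries on it in practice you would extract it into a field that is mapped as a number.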

Replace

You can replace any data within the log using a replace rule, for example renaming a field to fix a mapping exception in your logs, masking sensitive data such as credit card numbers, etc.

***To learn more about mapping exceptions, visit our post.

Example 1

Message as a string

{
   "message": "The request has succeeded."
}

Message as an object

{
   "message":
   {
       "status_code": 200,
       "code definition": "The request has succeeded."
   }
}

In Elasticsearch, key types are set dynamically. Sending two logs with the same field name but different data types (string vs. object) creates a race condition on the index mapping: the message key type will be either string or object, depending on which log arrived first. Eventually, all future logs with the opposite data type (for example, the field message is mapped as string while logs are flowing in with message as an object) will encounter a mapping exception.

Replacing message with message_object sets the key name to "message_object", which gets its own mapping (object) in our index.

REGEX 

"message"\s*:\s*{

Replace to 

"message_object": {

Result 

{
   "message_object":
   {
       "status_code": 200,
       "code definition": "The request has succeeded."
   }
}

Logs with a string value in message remain unchanged; the field name stays "message".
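As a rough local illustration of this rename rule (the helper name apply_replace is hypothetical), Python's re.sub does the same substitution: only logs where "message" holds an object match the REGEX, so string messages pass through untouched.

```python
import re

# The replace rule's REGEX: matches "message" only when its value
# opens an object ({), not when it is a string.
RENAME_RE = re.compile(r'"message"\s*:\s*{')

def apply_replace(raw_log: str) -> str:
    """Illustrative sketch: rename the key on object-valued messages."""
    return RENAME_RE.sub('"message_object": {', raw_log)

object_log = ('{"message": {"status_code": 200, '
              '"code definition": "The request has succeeded."}}')
string_log = '{"message": "The request has succeeded."}'

print(apply_replace(object_log))  # key renamed to "message_object"
print(apply_replace(string_log))  # unchanged
```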

Example 2

The log

{
   "name":"John Smith",
   "credit_number":"1234-5678-1234-5678",
   "expiration_date":"01/01",
   "id_number":123456789
}

REGEX (illustrative; it matches the field name and the first three digit groups)

"credit_number"\s*:\s*"\d{4}-\d{4}-\d{4}

Replace To

"credit_number":"XXXX-XXXX-XXXX

Result 

{
   "name":"John Smith",
   "credit_number":"XXXX-XXXX-XXXX-5678",
   "expiration_date":"01/01",
   "id_number":123456789
}
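The masking variant can be sketched the same way (the helper name mask_card and the exact REGEX are illustrative, not the product's internals): the REGEX matches only the first three digit groups, so the last four digits survive the substitution.

```python
import re

# Match the field name plus the first three groups of the card number;
# the trailing -5678" in the original log is left in place.
MASK_RE = re.compile(r'"credit_number"\s*:\s*"\d{4}-\d{4}-\d{4}')

def mask_card(raw_log: str) -> str:
    """Illustrative sketch: mask all but the last four card digits."""
    return MASK_RE.sub('"credit_number":"XXXX-XXXX-XXXX', raw_log)

log = ('{"name":"John Smith", "credit_number":"1234-5678-1234-5678", '
       '"expiration_date":"01/01", "id_number":123456789}')
print(mask_card(log))  # credit_number becomes "XXXX-XXXX-XXXX-5678"
```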

Block

Any log that matches a block rule’s REGEX will be blocked and won’t be indexed. For a log to be blocked, the REGEX needs to match text anywhere within the log.

*Note - Blocked logs are not counted in your daily quota.

Example

Our REGEX

"app_name"\s*:\s*"(web|mobile|IOS)"

The following log will get blocked

{
   "app_name":"IOS",
   "subsystem_name":"iPhone",
   "timestamp":1551774844957
}
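The block decision reduces to a regex search over the raw log text. A minimal sketch (the helper name should_block is hypothetical):

```python
import re

# The block rule's REGEX from the example above.
BLOCK_RE = re.compile(r'"app_name"\s*:\s*"(web|mobile|IOS)"')

def should_block(raw_log: str) -> bool:
    """Illustrative sketch: a log is dropped if the REGEX matches anywhere."""
    return BLOCK_RE.search(raw_log) is not None

ios_log = '{"app_name":"IOS", "subsystem_name":"iPhone", "timestamp":1551774844957}'
backend_log = '{"app_name":"backend", "timestamp":1551774844957}'

print(should_block(ios_log))      # True  -> blocked, not indexed
print(should_block(backend_log))  # False -> indexed normally
```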

Allow

Using allow rules, you can filter your data and keep only the logs that match your REGEX.

Example:  you have these 4 logs

1)

{
   "applicationName":"web_entry",
   "ServerName":"eu-01",
   "category":"coralogix",
   "threadID":"u@G-QXEGeFz"
}

2)

{
   "applicationName":"web_entry",
   "ServerName":"eu-02",
   "category":"coralogix",
   "threadID":"bp&67z#wPNY"
}

3)

{
   "applicationName":"web_entry",
   "ServerName":"eu-03",
   "category":"coralogix",
   "threadID":"pYY8@ePae23H"
}

4)

{
   "applicationName":"web_entry",
   "ServerName":"eu-04",
   "category":"coralogix",
   "threadID":"FWRGJqz8YEB"
}

REGEX

"ServerName"\s*:\s*"eu-(02|04)"

Only logs 2 and 4 will pass through our filter.
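An allow rule is the inverse of a block rule: a log survives only if the REGEX matches. The filter over the four example logs can be sketched as:

```python
import re

# The allow rule's REGEX from the example above.
ALLOW_RE = re.compile(r'"ServerName"\s*:\s*"eu-(02|04)"')

logs = [
    '{"applicationName":"web_entry", "ServerName":"eu-01", "category":"coralogix", "threadID":"u@G-QXEGeFz"}',
    '{"applicationName":"web_entry", "ServerName":"eu-02", "category":"coralogix", "threadID":"bp&67z#wPNY"}',
    '{"applicationName":"web_entry", "ServerName":"eu-03", "category":"coralogix", "threadID":"pYY8@ePae23H"}',
    '{"applicationName":"web_entry", "ServerName":"eu-04", "category":"coralogix", "threadID":"FWRGJqz8YEB"}',
]

# Keep only logs matching the REGEX: eu-02 and eu-04 pass the filter.
allowed = [log for log in logs if ALLOW_RE.search(log)]
print(len(allowed))  # 2
```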

JSON Extract

With JSON structured Logs, you can extract any field's content into one of Coralogix metadata fields (severity, category, className, methodName, threadId). A common use case is when you want to override the log severity, category, etc.

Example

When a JSON log message has a field stating the log severity but no severity was assigned to the log metadata severity key (so the log severity defaults to 'DEBUG'), we'll want to extract the severity from that field within the log into the Coralogix metadata severity key.

{
   "message": {
       "severity": "Info",
       "code definition": "The request has succeeded."
   }
}

REGEX

message.severity

Source Field

Text

Destination Field

Severity
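Conceptually, a JSON Extract rule walks a dot-separated path into the parsed log and copies that value into a metadata field. A minimal sketch, assuming a hypothetical helper name (json_extract):

```python
# Illustrative sketch of a JSON Extract rule: follow a dot-path into
# the parsed log and return the value found there (or None if missing).
def json_extract(log, source_path):
    node = log
    for part in source_path.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

log = {"message": {"severity": "Info",
                   "code definition": "The request has succeeded."}}

# The extracted value would override the metadata severity key.
metadata_severity = json_extract(log, "message.severity")
print(metadata_severity)  # Info
```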

Parse

Using parse rules, you can convert your unstructured log data (plain text) into JSON structured logs.

*** Note, when sending JSON structured logs, Coralogix automatically parses them as JSON and maps them to Elasticsearch accordingly. You can read more about this here: https://coralogix.com/tutorials/auto-json-parsing/

Example

Before

2019-08-08T18:28:32.450-0500 INFO NETWORK [initandlisten] waiting for connections on port 27017

REGEX

(?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3}-[0-9]{4}) (?P<severity>[A-Z]+) [A-Z]+ \[[a-zA-Z0-9]+\] (?P<message>[^\n]+)

After

{
   "severity": "INFO",
   "message": "waiting for connections on port 27017",
   "timestamp": "2019-08-08T18:28:32.450-0500"
}
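The parse rule above can be reproduced locally with Python's re module: each named capture group becomes a key of the resulting JSON log, and any unnamed part of the match (here the NETWORK component and the [initandlisten] context) is discarded.

```python
import re

# The parse rule's REGEX from the example above, split for readability.
PARSE_RE = re.compile(
    r'(?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T'
    r'[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3}-[0-9]{4}) '
    r'(?P<severity>[A-Z]+) [A-Z]+ \[[a-zA-Z0-9]+\] (?P<message>[^\n]+)'
)

line = ("2019-08-08T18:28:32.450-0500 INFO NETWORK [initandlisten] "
        "waiting for connections on port 27017")

# groupdict() yields exactly the keys of the structured log.
match = PARSE_RE.match(line)
print(match.groupdict())
```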

Troubleshooting our rules

  • Make sure you are creating the rule via the REGEX builder on the basis of a log example

  • Check the REGEX against more than one log example

  • Make sure you take the log example either from 'LiveTail' (since it displays the log right after all parsing rules were applied and before ingestion to Elasticsearch) or from the 'Logs' screen: open the log info-panel and copy the text from the 'raw text' option

  • Are your rules in the right order within the group? Are the groups in the right order? (Rules run group by group, and within a group processing leaves the group once a rule matches)

  • If it is a block rule, make sure it is placed alone in its group, with no other rules alongside it

  • By default, rules are applied to the entire log text. If you want to apply a rule to a specific field within the log (for JSON logs), make sure that the source field in the rule is set correctly, or keep it as 'Text' and have your REGEX enforce a match on both the field's name and its value, for example, "key":"YOUR_REGEX"
