
June 11, 2018

AWS Logging and Analysis - Part 4.2 - Using AWS Lambdas for Real-time Log filtering based on AWS Services

by 4hathacker | in AWS Logging and Analysis at 7:45 AM

Hi folks!!!

Welcome to AWS Logging and Analysis quest...


In the previous article, we successfully configured AWS Lambda for log filtering; here we will discuss how to write the Lambda function itself. To write any Lambda function in AWS, we have to follow certain patterns and concepts. These include:

1. Lambda Function Handler: AWS Lambda invokes this handler when the service executes our code. The handler takes two arguments, viz. event (to pass event data in to the handler) and context (to provide runtime information to the handler), as shown in the sketch after this list.

2. Context Object: This is the same "context" we pass to the Lambda function handler. It lets our code interact with the AWS Lambda service, so we can get information like the log stream name, the AWS request ID, the log group name, etc.

3. Logging: A Lambda function by default comes with a CloudWatch policy so it can write its own logs. Refer to the previous article, in which we added an S3 Full Access policy to "my_logfilter_role" in IAM; a logging policy was already present there. We can append to these logs in the Lambda code either by writing a "print" statement or by using the logger functions from Python's logging module.

4. Function Errors: To check for exceptions and errors in the Lambda function, we can capture the failure information in a JSON stackTrace array.
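
To make the handler, context, and logging concepts concrete, here is a minimal sketch of a Python handler (the name lambda_handler is the usual convention, and the fields logged here are just illustrative):

import logging

# Module-level logger; Lambda forwards both print output and logging
# output to the function's own CloudWatch log stream.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Runtime information exposed by the context object.
    logger.info("Log group: %s", context.log_group_name)
    logger.info("Log stream: %s", context.log_stream_name)
    logger.info("Request ID: %s", context.aws_request_id)
    # A plain print statement also lands in the same log stream.
    print("Received event keys: %s" % list(event.keys()))
    return "done"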

To read more about AWS Lambda programming, please go through the AWS Lambda documentation.

Let's start writing our Lambda function now.

1. From the incoming event, we extract the "data" field of the "awslogs" attribute into the string object outEvent.

2. The data in this attribute is base64-encoded, gzip-compressed JSON. Decode it from base64, then uncompress it with Python's zlib library.

3. After that, parse the decompressed data to obtain JSON.

4. Create a dictionary to filter the events. This dictionary holds key-value pairs where each key is a service name and each value is a tuple of event names to be searched for inside the JSON payload, e.g.
services = {
    "ec2": ("RunInstances", "StopInstances"),
    "s3": ("CreateBucket", "DeleteBucket")
}

5. When an event name matches, the raw log record is dumped as JSON into the corresponding service folder of the S3 bucket.

The AWS Lambda code will finally look like this.
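
Putting steps 1 through 5 together, a minimal sketch of the function could look like the following (the destination bucket all-logs-filtered-123 is taken from later in this article; the object key layout and my use of the CloudTrail eventName field are assumptions, so treat the repo version as authoritative):

import base64
import json
import zlib

import boto3

# Service -> event names to keep; anything not listed here is ignored.
services = {
    "ec2": ("RunInstances", "StopInstances"),
    "s3": ("CreateBucket", "DeleteBucket")
}

DEST_BUCKET = "all-logs-filtered-123"  # filtered-logs bucket from this series

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # 1. Extract the "data" field of the "awslogs" attribute.
    out_event = event["awslogs"]["data"]

    # 2. Base64-decode, then gunzip; 16 + MAX_WBITS tells zlib to
    #    expect a gzip header on the compressed payload.
    payload = zlib.decompress(base64.b64decode(out_event), 16 + zlib.MAX_WBITS)

    # 3. Parse the decompressed payload as JSON.
    log_data = json.loads(payload)

    # 4. and 5. Each logEvent message is one CloudTrail record; dump any
    # record whose eventName matches into the matching service folder.
    # Assumed key layout: <service>/<EventName>_<logEventId>.json
    for log_event in log_data.get("logEvents", []):
        record = json.loads(log_event["message"])
        event_name = record.get("eventName")
        for service, event_names in services.items():
            if event_name in event_names:
                key = "%s/%s_%s.json" % (service, event_name, log_event["id"])
                s3.put_object(Bucket=DEST_BUCKET, Key=key,
                              Body=json.dumps(record, indent=2))
    return "filtering complete"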

You can find the source code at my Github repo.

6. To check whether the function is working or not, I first enabled the AWS Lambda trigger and then stopped one of the running EC2 instances, whose instance ID ends with c5a1. The CloudTrail Management Console received the log of this particular event after 5-7 minutes. Check the same in the log snapshot from CloudTrail below.
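
For reference, the same stop action can be scripted with boto3 (the instance ID below is a placeholder, since only the c5a1 suffix of the real one appears here, and the region is an assumption):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder ID; the real instance in this test merely ends with "c5a1".
response = ec2.stop_instances(InstanceIds=["i-0123456789abcc5a1"])
print(response["StoppingInstances"][0]["CurrentState"]["Name"])  # e.g. "stopping"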



7. I went to the S3 bucket "all-logs-filtered-123" and checked for the "StopInstances" event in the ec2 folder. To look at the data, I selected the JSON file and clicked the "Select From" button. There, in the Preview pane, I found the same instance ID. I copied the JSON content to Notepad++.
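
The "Select From" preview in the console can also be reproduced programmatically with the S3 Select API; a sketch with boto3 follows (the object key is hypothetical, following the <service>/<EventName>_... layout assumed earlier):

import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="all-logs-filtered-123",
    Key="ec2/StopInstances_example.json",  # hypothetical key
    ExpressionType="SQL",
    Expression="SELECT s.eventName, s.awsRegion FROM S3Object s",
    InputSerialization={"JSON": {"Type": "DOCUMENT"}},
    OutputSerialization={"JSON": {}},
)

# The response payload is an event stream; Records events carry the rows.
for chunk in resp["Payload"]:
    if "Records" in chunk:
        print(chunk["Records"]["Payload"].decode("utf-8"))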

8. For further confirmation, I noted the creation time of the "StopInstances" file. I opened "all-logs-bucket123" and checked for the json.gz file which arrived at almost the same time as the "StopInstances" file.

Upon further exploration, I found that the "StopInstances" event is present in the file which was created exactly one second before the filtered log file. I copied this from the Preview pane to Notepad++ as well. Please find below the screenshot in which I have gathered the JSON of both the filtered and the unfiltered file. The unfiltered file collects 4021 lines of log records, while the filtered single-event log file comprises just 58 lines of JSON content.

This confirms that the flow of logs through my AWS Lambda function is working correctly for the events listed in the services dictionary of the Lambda function.



