June 10, 2018

AWS Logging and Analysis - Part 4.1 - Using AWS Lambdas for Real-time Log filtering based on AWS Services

by 4hathacker  |  in AWS Logging and Analysis at  1:34 PM

Hi folks!!!

Welcome to AWS Logging and Analysis quest...


In the previous articles, we discussed CloudWatch, CloudTrail, and S3, and used all of these services to analyse the logs of events. In this article, we will go through AWS Lambda and create a simple function using the AWS Python SDK (boto3) to extract event logs based on event names and services. After filtering, we can either push the events somewhere else or save them in different folders of an S3 bucket.

AWS Lambda lets us run code without provisioning or managing servers. We do pay for this service, but only for the time the function consumes; there is no charge when our code is not running, which I find a huge relaxation provided by AWS. Specific configuration information, like the amount of memory, the maximum execution time, the policy for the Lambda execution, etc., is required before a Lambda function can run. Due to the scope of this article, I am not able to cover it all, but I will try to cover as much as is relevant here. Please feel free to read more about AWS Lambda in the AWS documentation.
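For reference, these settings can also be inspected or changed programmatically with boto3. The snippet below is only a minimal sketch, assuming a function named "my_log_filter" already exists; the memory size and timeout values are illustrative, not recommendations.

import boto3

# Minimal sketch, assuming the function "my_log_filter" already exists.
lambda_client = boto3.client('lambda')

# Inspect the current configuration (memory, timeout, execution role, ...).
config = lambda_client.get_function_configuration(FunctionName='my_log_filter')
print(config['MemorySize'], config['Timeout'], config['Role'])

# Adjust memory and maximum execution time (illustrative values only).
lambda_client.update_function_configuration(
    FunctionName='my_log_filter',
    MemorySize=128,   # MB
    Timeout=60        # seconds
)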

Let's go through each step in abstract form first to better understand the scenario.

A) Create a Lambda execution role to be assigned to the Lambda function so that it can access CloudWatch Logs for writing its own logs and can save filtered events in an S3 bucket.

B) Associate a trigger with the Lambda function so that it executes automatically whenever events arrive in CloudWatch from the trail we configured in CloudTrail in Part 2 of this series.

C) Create an S3 bucket and different folders (if required) to save the filtered events.

D) To track S3 access activity while the filtered logs are being saved, we can enable S3 bucket access logging. This step is optional.

E) Write the high-level code in the Lambda function editor using any AWS SDK (Python, Node.js, etc.) and set the proper basic execution settings. A rough sketch of what such a handler might look like follows this list.
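To make steps B, C, and E concrete, here is a rough sketch of such a handler. It only shows the general shape; the routing rules are a simplification for illustration, and the actual code will be written and tested in the next article. The bucket and folder names match the ones we create in step C below.

import base64
import gzip
import json

import boto3

s3 = boto3.client('s3')

# Names from step C below; adjust as needed.
BUCKET = 'all-logs-filtered-123'
# Simplified routing by eventSource. Note that VPC API calls also arrive with
# eventSource "ec2.amazonaws.com", so the real code may route by eventName too.
FOLDERS = {
    'ec2.amazonaws.com': 'ec2',
    's3.amazonaws.com': 's3',
}

def lambda_handler(event, context):
    # CloudWatch Logs delivers the log data base64-encoded and gzip-compressed.
    payload = gzip.decompress(base64.b64decode(event['awslogs']['data']))
    log_data = json.loads(payload)

    for log_event in log_data['logEvents']:
        # Each message is a CloudTrail record in JSON form.
        record = json.loads(log_event['message'])
        source = record.get('eventSource', '')
        name = record.get('eventName', '')

        folder = FOLDERS.get(source)
        if folder is None:
            continue  # not a service we are interested in

        # Save the filtered event under the matching folder in S3.
        key = '{}/{}_{}.json'.format(folder, name, log_event['id'])
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(record))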

Here we will start by creating a Lambda function named "my_log_filter" with an execution role "my_log_filter_role".

1. Go to Lambda --> Functions --> Create function --> Author from scratch.

2. A pop-up will appear for creating a custom role for Lambda execution. Create a basic IAM role for our Lambda function named "my_log_filter_role" with the default policy statement. We will alter it later as needed.

3. Finally, hit the Create function button.

4. Associate a trigger by selecting "CloudWatch Logs" from the left "Add Triggers" pane.

5. For the trigger configuration, scroll down. Select the log group we created in the previous article, "CloudTrail/DefaultLogGroup", and set the filter name to "my_log_filter_trigger".

Note: The Filter Pattern must remain empty. Only one filter can be applied at a time to a single Lambda function, and we do not want the incoming log events to be filtered here; an empty pattern delivers the complete log record for every logging event.

6. Disable the trigger, click "Add", and then hit the "Save" button.

7. Go to S3 and create a bucket named "all-logs-filtered-123". Inside the bucket, create folders for capturing ec2, vpc, and s3 service logs.

8. Now we want the filtered logs to be written to S3 automatically, so we need to give Lambda permission to write to S3. Go to IAM roles and alter "my_log_filter_role" to add S3 read-write access. Here we have attached the "AmazonS3FullAccess" managed policy to the role. (A boto3 sketch of this step and the trigger setup follows the steps below.)

9. Finally, we will write the code for the Lambda function in the Lambda editor.
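For completeness, the console actions in steps 4-6 and 8 also have boto3 equivalents. The sketch below assumes the names used above; the region and account ID are placeholders for illustration, not values from this setup.

import boto3

iam = boto3.client('iam')
logs = boto3.client('logs')
lambda_client = boto3.client('lambda')

REGION = 'us-east-1'         # placeholder, use your region
ACCOUNT_ID = '123456789012'  # placeholder, use your account ID
LOG_GROUP = 'CloudTrail/DefaultLogGroup'
FUNCTION = 'my_log_filter'

# Step 8: attach S3 read-write access to the Lambda execution role.
iam.attach_role_policy(
    RoleName='my_log_filter_role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess'
)

# Allow CloudWatch Logs to invoke the function for this log group.
lambda_client.add_permission(
    FunctionName=FUNCTION,
    StatementId='allow-cloudwatch-logs',
    Action='lambda:InvokeFunction',
    Principal='logs.amazonaws.com',
    SourceArn='arn:aws:logs:{}:{}:log-group:{}:*'.format(REGION, ACCOUNT_ID, LOG_GROUP)
)

# Steps 4-6: subscribe the function to the log group with an empty filter
# pattern, so every log event is delivered unfiltered (see the note under step 5).
logs.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName='my_log_filter_trigger',
    filterPattern='',
    destinationArn='arn:aws:lambda:{}:{}:function:{}'.format(REGION, ACCOUNT_ID, FUNCTION)
)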

That's all for this article. We will write and test the Lambda code in the next article, picking up from here.





