
Posts

Showing posts with the label AWS

CloudTrail Event History AWS learning

A question: is the data shown in the CloudTrail Event History dashboard the same as the data exported by a trail to S3, for example?

Answer: The data shown in the CloudTrail Event History dashboard and the data exported by CloudTrail to an S3 bucket are generally the same, but there are some differences in the way they are presented and accessed.

CloudTrail Event History dashboard: The CloudTrail Event History dashboard is a web-based interface provided by AWS that allows you to view and search CloudTrail events. It provides a user-friendly interface with filtering and searching capabilities, making it easy to explore and analyze events, and it displays events in near real time, giving you a live view of your CloudTrail activity.

Exporting CloudTrail logs to S3: CloudTrail logs can be exported to an S3 bucket for long-term storage, backup, and analysis. When exporting logs to S3, the logs are saved as JSON files in the specified bucket. Exported logs include detai…
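To make the comparison concrete, here is a minimal boto3 sketch of the two access paths: the LookupEvents API (the same data the Event History dashboard shows) and reading a log file that a trail delivered to S3. The bucket name, object key, and filter values are hypothetical placeholders.

import gzip
import json
import boto3

# Path 1: the LookupEvents API, which backs the Event History dashboard
cloudtrail = boto3.client("cloudtrail")
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=10,
)
for event in response["Events"]:
    print(event["EventName"], event["EventTime"])

# Path 2: a log file the trail delivered to S3 (gzip-compressed JSON with a "Records" array)
s3 = boto3.client("s3")
bucket = "my-cloudtrail-bucket"  # hypothetical bucket name
key = "AWSLogs/123456789012/CloudTrail/us-east-1/2023/01/01/example.json.gz"  # hypothetical key
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
for record in json.loads(gzip.decompress(body))["Records"]:
    print(record["eventName"], record["eventTime"])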

Migrate an instance in an Auto Scaling group to another subnet (microservices with Consul)

To migrate an instance from one subnet to another without downtime while using Auto Scaling and an Application Load Balancer (ALB), you can follow these steps:

Create the target subnet: Set up the new subnet where you want to migrate your instance. Ensure that the subnet has the necessary configuration and resources required for your instance.

Prepare the target instance: Launch a new instance in the target subnet with the desired configuration and AMI. This instance will be used as the replacement for the instance in the source subnet.

Attach the target instance to the Auto Scaling group: Add the target instance to the Auto Scaling group that manages your existing instances. This ensures that the new instance is automatically managed by the Auto Scaling group and is part of the fleet.

Configure the target instance: Set up the target instance to match the configuration of the existing instance. This may involve installing the necessary software, libraries, and configuration…
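As a rough illustration of the attach/detach steps, here is a hedged boto3 sketch. It assumes the replacement instance has already been launched in the target subnet; the group name, subnet IDs, and instance IDs are hypothetical placeholders, not values from the original post.

import boto3

autoscaling = boto3.client("autoscaling")

ASG_NAME = "my-service-asg"              # hypothetical Auto Scaling group
NEW_INSTANCE_ID = "i-0new0000000000000"  # instance launched in the target subnet
OLD_INSTANCE_ID = "i-0old0000000000000"  # instance running in the source subnet

# Let the Auto Scaling group span the target subnet as well (comma-separated subnet IDs)
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    VPCZoneIdentifier="subnet-0source000000000,subnet-0target000000000",
)

# Attach the replacement instance so the group (and the ALB target group it
# registers instances with) starts managing it
autoscaling.attach_instances(
    AutoScalingGroupName=ASG_NAME,
    InstanceIds=[NEW_INSTANCE_ID],
)

# Once the new instance passes health checks, detach the old one and let the
# desired capacity shrink back to its previous value
autoscaling.detach_instances(
    AutoScalingGroupName=ASG_NAME,
    InstanceIds=[OLD_INSTANCE_ID],
    ShouldDecrementDesiredCapacity=True,
)

Attaching an instance raises the group's desired capacity by one, so detaching the old instance with ShouldDecrementDesiredCapacity=True brings the group back to its original size.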

S3 Bucket Security

Enabling Block Public Access on an Amazon S3 bucket is an essential security measure to prevent accidental exposure of your data to the public. In addition to Block Public Access, here are some other security improvements you can implement and their considerations:

Limiting source IP: By configuring bucket policies or access control lists (ACLs) to allow access only from specific IP addresses or IP ranges, you can further restrict access to your bucket. This helps mitigate the risk of unauthorized access from unknown or potentially malicious sources. Considerations: accurately define and maintain the allowed IP addresses or ranges to avoid inadvertently blocking legitimate access, and regularly review and update the IP restrictions as your infrastructure or your authorized users' locations change.

Versioning: Enabling versioning for your S3 bucket allows you to retain multiple versions of an object over time. This feature provides added security…
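As an illustration of the Block Public Access, source-IP restriction, and versioning points, here is a minimal boto3 sketch. The bucket name and IP range are hypothetical placeholders.

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical bucket name

# Block all public access at the bucket level
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Deny any request that does not originate from the allowed IP range
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRequestsOutsideAllowedRange",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# Keep older versions of objects around
s3.put_bucket_versioning(
    Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"}
)

Note that a broad deny policy like this can lock out legitimate callers, including yourself, if the IP range is wrong, so verify the range before applying it.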

How to package dependencies for AWS Lambda

We can create a deployment package that includes the dependencies and the source code for the function. Here are the steps to create the deployment package:

Create a virtual environment for the project:
python3 -m venv myenv-coding
source myenv-coding/bin/activate
Note: to use a specific Python version, run that interpreter directly (for example python3.8 -m venv ...) or point the /usr/local/bin/python3 soft link at the desired Python.

Install the dependencies in the virtual environment. For example, if we need the following dependencies:
pip install boto3
pip install argparse
pip install gspread

Package the virtual environment and the source code into a deployment package:
cd myenv-coding/lib/python3.8/site-packages/
zip -r9 ${OLDPWD}/function.zip .
cd $OLDPWD
zip -g function.zip <source_code>.py

Upload the deployment package to AWS Lambda using the AWS Management Console, the AWS CLI, or an AWS SDK. We can then test the function in the AWS Lambda Console to verify that it works as expected. Thank you.
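For the upload step, a minimal sketch using the AWS SDK for Python (boto3) might look like this; the function name is a hypothetical placeholder and the function must already exist.

import boto3

lambda_client = boto3.client("lambda")

# Replace the existing function's code with the contents of the deployment package
with open("function.zip", "rb") as f:
    lambda_client.update_function_code(
        FunctionName="my-example-function",  # hypothetical function name
        ZipFile=f.read(),
    )

The equivalent AWS CLI call is: aws lambda update-function-code --function-name my-example-function --zip-file fileb://function.zip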