Launching an application on AWS Cloud using EC2, EFS, S3 and CloudFront
Here we will launch an application on AWS using EFS. This is similar to the previous blog, except that we use EFS instead of EBS. Link to the previous blog: https://medium.com/@rohitraut3366/launching-the-complete-infrastructure-in-aws-using-terraform-b0782e6895cf
Here is the step-by-step process, which is easy to understand.
Step 1: Creating a key pair and a security group allowing ports 22 (SSH), 80 (HTTP) and 2049 (NFS).
Step 2: Creating an S3 bucket and uploading an image to it.
Step 3: Creating a CloudFront distribution using the S3 bucket created in Step 2.
Step 4: Writing a bucket policy so CloudFront can access the objects in S3.
Step 5: Creating an EFS (Elastic File System) and mounting it in all three subnets.
Step 6: Launching two EC2 instances, one for the webserver and one for testing.
Step 7: Mounting the EFS to /var/www/html.
LET’S GET STARTED!!
First, we should know what EFS is and why we need it.
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It provides centralized storage that can be mounted on multiple instances at the same time, which is not possible with EBS.
To launch this infrastructure we will use Terraform.
Terraform is an open-source infrastructure-as-code tool created by HashiCorp. It enables users to define and provision datacenter infrastructure using a high-level configuration language, and to safely and predictably create, change, and improve that infrastructure.
Create an IAM user in your AWS account and configure a profile from the command line using its access and secret keys.
aws configure --profile rohit
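The command asks for the credentials interactively; the values below are placeholders for your own keys:
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: ap-south-1
Default output format [None]: json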
Terraform Code
Adding the AWS provider in the Terraform file
provider "aws" {region = "ap-south-1"profile = "rohit"}
Step 1: Creating a key pair and a security group allowing ports 22 (SSH), 80 (HTTP) and 2049 (NFS).
Key Pairs
resource "tls_private_key" "key" {algorithm = "RSA"ecdsa_curve = "2048"}resource "aws_key_pair" "key_reg" {key_name = "mykeyEfs"public_key = tls_private_key.key.public_key_openssh}resource "local_file" "priavte_key" {content = tls_private_key.key.private_key_pemfilename = "mykey.pem"}
Here we use three resources: tls_private_key (generates a secure private key), aws_key_pair (registers the public key from tls_private_key with AWS so it can be used to log in to EC2 instances), and local_file (stores the private key on the local computer so we can log in to the EC2 instances over SSH).
Security Groups
resource "aws_security_group" "Security" {name = "EC2_EFS_SECURITY_GROUP"description = "EC2"ingress {description = "webserver"from_port = 80to_port = 80protocol = "tcp"cidr_blocks = ["0.0.0.0/0"]}ingress {description = "ICMP ssh protocol NFS"from_port = 22to_port = 22protocol = "tcp"cidr_blocks = ["0.0.0.0/0"]}ingress {description = "NFS"from_port = 2049to_port = 2049protocol = "tcp"cidr_blocks = ["0.0.0.0/0"]}egress {from_port = 0to_port = 0protocol = "-1"cidr_blocks = ["0.0.0.0/0"]}tags = {Name = "SECURITY_GROUP_t2"}}
Step 2: Creating an S3 bucket and uploading an image to it.
resource "aws_s3_bucket" "b" {bucket = "mybucket123123123123"acl = "private"tags = {Name = "My bucket" }}resource "aws_s3_bucket_object" "obj" {key = "Img.png"bucket = aws_s3_bucket.b.idsource = "abc.png"acl = "public-read"}
Step 3: Creating a CloudFront distribution using the S3 bucket created in Step 2.
resource "aws_cloudfront_distribution" "s3_distribution" {origin {domain_name = aws_s3_bucket.b.bucket_regional_domain_nameorigin_id = local.s3_origin_ids3_origin_config {origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}}enabled = truedefault_cache_behavior {allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]cached_methods = ["GET", "HEAD"]target_origin_id = local.s3_origin_idforwarded_values {query_string = falsecookies {forward = "none"}}viewer_protocol_policy = "allow-all"min_ttl = 0default_ttl = 3600max_ttl = 86400}# Cache behavior with precedence 0ordered_cache_behavior {path_pattern = "*"allowed_methods = ["GET", "HEAD", "OPTIONS"]cached_methods = ["GET", "HEAD", "OPTIONS"]target_origin_id = local.s3_origin_idforwarded_values {query_string = falseheaders = ["Origin"]cookies {forward = "none"}}min_ttl = 0default_ttl = 86400max_ttl = 31536000compress = trueviewer_protocol_policy = "redirect-to-https"}restrictions {geo_restriction {restriction_type = "none"}}tags = {Environment = "production"}viewer_certificate {cloudfront_default_certificate = true}}
Step 4: Writing a bucket policy so CloudFront can access the objects in S3.
data "aws_iam_policy_document" "distribution" {statement {actions = ["s3:GetObject"]principals {type = "AWS"identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]}resources = ["${aws_s3_bucket.b.arn}/*"]}}resource "aws_s3_bucket_policy" "web_distribution" {bucket = aws_s3_bucket.b.idpolicy = data.aws_iam_policy_document.distribution.json}
Step 5: Creating an EFS (Elastic File System) and mounting it in all three subnets.
resource "aws_efs_file_system" "nat" {creation_token = "webserver-efs"tags = {Name = "mystorage"}}
resource "aws_efs_mount_target" "efs_1a" {file_system_id = aws_efs_file_system.nat.idsubnet_id = "subnet-061d276e"security_groups = ["${aws_security_group.Security.id}"]}
resource "aws_efs_mount_target" "efs_1b" {file_system_id = aws_efs_file_system.nat.idsubnet_id = "subnet-7d1a7131"security_groups = ["${aws_security_group.Security.id}"]}
resource "aws_efs_mount_target" "efs_1c" {file_system_id = aws_efs_file_system.nat.idsubnet_id = "subnet-fd07b586"security_groups = ["${aws_security_group.Security.id}"]}
Step 6: Launching two EC2 instances, one for the webserver and one for testing the EFS connection.
resource "aws_instance" "web" {depends_on = [aws_security_group.Security]ami = "ami-0732b62d310b80e97"instance_type = "t2.micro"key_name = aws_key_pair.key_reg.key_namesecurity_groups = ["${aws_security_group.Security.name}"]tags = {Name = "webserver1"}}resource "aws_instance" "web2" {depends_on = [aws_security_group.Security]ami = "ami-0732b62d310b80e97"instance_type = "t2.micro"key_name = aws_key_pair.key_reg.key_namesecurity_groups = ["${aws_security_group.Security.name}"]tags = {Name = "Backup@/efs"}}
Step 7: Configuring EC2, mounting the EFS to /var/www/html, and adding a cluster3 resource for testing.
resource "null_resource" "cluster" {depends_on = [aws_instance.web,aws_efs_file_system.nat,aws_efs_mount_target.efs_1c,aws_efs_mount_target.efs_1b,aws_efs_mount_target.efs_1a]connection {type = "ssh"user = "ec2-user"private_key = tls_private_key.key.private_key_pemhost = aws_instance.web.public_ip}provisioner "remote-exec" {inline = ["sudo yum install git httpd php -y","sudo systemctl start httpd","sudo systemctl enable httpd","sudo yum install amazon-efs-utils nfs-utils -y","sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ${aws_efs_file_system.nat.id}.efs.ap-south-1.amazonaws.com:/ /var/www/html","sudo rm -rf /var/www/html/*","sudo git clone https://github.com/rohitraut3366/mulicloud.git /var/www/html"]}}resource "null_resource" "cluster3" {depends_on = [aws_efs_file_system.nat,null_resource.cluster]connection {type = "ssh"user = "ec2-user"private_key = tls_private_key.key.private_key_pemhost = aws_instance.web2.public_ip}provisioner "remote-exec" {inline = ["sudo yum install amazon-efs-utils nfs-utils -y","sudo mkdir /efs","sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ${aws_efs_file_system.nat.id}.efs.ap-south-1.amazonaws.com:/ /efs",]}}
Now save all the code in a .tf file and run the following commands.
terraform init
terraform apply --auto-approve
Finally, change the image URLs in the HTML files at /var/www/html to point to the CloudFront URL.
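If you don't want to look the URL up in the AWS console, an output like this (a small optional addition to the code above) prints the CloudFront domain name after apply:

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}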
To destroy the environment:
terraform destroy --auto-approve
GitHub Link : https://github.com/rohitraut3366/AWS_Terraform_EFS_EC2.git