<h1>Getting Started with Terraform</h1>
<p><em>Published 2021-08-12, last modified 2021-08-26</em></p>
<p><a href="https://learn.hashicorp.com/tutorials/terraform/install-cli">Official documentation</a></p>
<p><a href="https://www.hashicorp.com/c1m">The One Million Container Challenge</a></p>
<p><a href="https://github.com/hashicorp/c1m">c1m github</a></p>
<p><a href="https://www.hashicorp.com/c2m">The Two Million Container Challenge</a></p>
<p><a href="https://github.com/hashicorp/c2m">c2m github</a></p>
<p><a href="https://www.44bits.io/ko/post/what-is-the-best-editor-that-supports-terraform">Installing the IntelliJ plugin</a></p>
<h2>Creating a working Terraform server</h2>
<h3>Installing Terraform</h3>
<p>Launch an EC2 instance from an Amazon Linux AMI.<br />
500 MB of memory is sufficient.</p>
<p>Install Terraform:</p>
<pre><code class="language-bash">sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform

terraform -help</code></pre>
<h3>Setting up IAM permissions</h3>
<p>Create an IAM user by following <a href="https://www.skyer9.pe.kr/wordpress/?p=2821">this post</a>.</p>
<p>Grant it the <code>AmazonEC2FullAccess</code> policy.</p>
<p>Run <code>aws configure</code> with the credentials from the downloaded CSV file:</p>
<pre><code class="language-bash">aws configure
AWS Access Key ID [None]: AKIA3VXXXXXXXXXX
AWS Secret Access Key [None]: sDSWqBzAyunFXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: ap-northeast-2</code></pre>
<pre><code class="language-bash">mkdir mywork
cd mywork</code></pre>
<h2>Creating an EC2 instance</h2>
<pre><code class="language-bash">vi main.tf
-------------------------------------
provider "aws" {
  profile = "default"
  region  = "ap-northeast-2"
}

resource "aws_instance" "my_server" {
  ami           = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"

  tags = {
    Name = "MyExampleAppServerInstance"
  }
}
-------------------------------------</code></pre>
<p>The AMI image ID can be found as shown below.</p>
<p><a href="https://www.skyer9.pe.kr/wordpress/wp-content/uploads/2021/08/2021-08-14-02.png"><img decoding="async" src="https://www.skyer9.pe.kr/wordpress/wp-content/uploads/2021/08/2021-08-14-02.png" alt="" /></a></p>
<p>Verify the configuration:</p>
<pre><code class="language-bash"># initialize Terraform
terraform init

# validate the configuration files
terraform validate

# show what applying the configuration would do
terraform plan</code></pre>
<p>Actually create the instance:</p>
<pre><code class="language-bash">terraform apply</code></pre>
<p>The command below shows the state of what was created.<br />
The EC2 instance exists, but it runs no services yet,<br />
and there is not even an SSH key pair for logging in to it.</p>
<pre><code class="language-bash">terraform show</code></pre>
<p>Delete the created instance:</p>
<pre><code class="language-bash">terraform destroy</code></pre>
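<p>Instead of digging attributes out of <code>terraform show</code>, frequently needed values can be declared as outputs. A minimal sketch, assuming it is saved as <code>outputs.tf</code> next to the <code>main.tf</code> above (the output names are my own):</p>

```terraform
# outputs.tf -- printed at the end of `terraform apply`,
# or on demand with `terraform output <name>`
output "instance_id" {
  description = "ID of the created EC2 instance"
  value       = aws_instance.my_server.id
}

output "instance_public_ip" {
  description = "Public IP address of the created EC2 instance"
  value       = aws_instance.my_server.public_ip
}
```

<p>After <code>terraform apply</code>, <code>terraform output instance_public_ip</code> prints just that one value, which is handy in scripts.</p>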
<h2>Running a Docker image on an EC2 instance</h2>
<h3>Goal</h3>
<p>Create an EC2 instance, run the nginx image on it, and open port 8080 to the public.</p>
<h3>Creating the EC2 instance</h3>
<pre><code class="language-bash">vi main.tf
-------------------------------------
provider "aws" {
  profile = "default"
  region  = "ap-northeast-2"
}

resource "aws_instance" "my_server" {
  ami           = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"

  tags = {
    Name = "my webserver"
  }
}
-------------------------------------</code></pre>
<h3>Running the nginx image</h3>
<p><a href="https://medium.com/avmconsulting-blog/provisioned-nginx-webserver-with-terraform-b7c205000ae5">Reference</a></p>
<pre><code class="language-bash">vi main.tf
-------------------------------------
provider "aws" {
  profile = "default"
  region  = "ap-northeast-2"
}

resource "aws_instance" "my_server" {
  ami           = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"

  user_data = &lt;&lt;-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install docker -y
    sudo service docker start
    sudo systemctl enable docker.service
    sudo mkdir -p /var/web
    echo "&lt;h1&gt;Hello, World.&lt;/h1&gt;" | sudo tee /var/web/index.html
    sudo docker run --name nginx \
                    -v /var/web:/usr/share/nginx/html \
                    -d \
                    -p 8080:80 \
                    nginx
  EOF

  tags = {
    Name = "my webserver"
  }
}
-------------------------------------</code></pre>
<h3>Opening port 8080</h3>
<p><a href="https://klotzandrew.com/blog/deploy-an-ec2-to-run-docker-with-terraform">Reference</a></p>
<p><code>sg_ssh</code> and <code>key_name</code> are for testing only; remove them afterwards.</p>
<pre><code class="language-bash">vi main.tf
-------------------------------------
provider "aws" {
  profile = "default"
  region  = "ap-northeast-2"
}

resource "aws_instance" "my_server" {
  ami           = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"
  key_name      = "skyer9"  # use your own key pair name

  vpc_security_group_ids = [
    aws_security_group.sg_web.id,
    aws_security_group.sg_ssh.id
  ]

  user_data = &lt;&lt;-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install docker -y
    sudo service docker start
    sudo systemctl enable docker.service
    sudo mkdir -p /var/web
    echo "&lt;h1&gt;Hello, World.&lt;/h1&gt;" | sudo tee /var/web/index.html
    sudo docker run --name nginx \
                    -v /var/web:/usr/share/nginx/html \
                    -d \
                    -p 8080:80 \
                    nginx
  EOF

  tags = {
    Name = "my webserver"
  }
}

resource "aws_security_group" "sg_web" {
  name = "sg_web"
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Terraform removes AWS's default allow-all outbound rule,
  # so it has to be added back explicitly.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "sg_ssh" {
  name = "sg_ssh"
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
-------------------------------------</code></pre>
<h3>Creating it</h3>
<pre><code class="language-bash">terraform init
terraform validate
terraform plan

terraform apply</code></pre>
<p>Even after the instance is created, it takes another 30&#8211;60 seconds for nginx to come up.</p>
<pre><code class="language-bash">terraform destroy</code></pre>
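<p>The examples above hard-code the AMI ID. As a sketch of an alternative, a <code>data "aws_ami"</code> lookup can select the most recent Amazon Linux 2 image automatically, so the configuration keeps working when AWS publishes new AMIs (the name filter below is an assumption matching the x86_64 gp2 image):</p>

```terraform
# Look up the most recent Amazon Linux 2 AMI published by Amazon
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then, inside the instance resource, reference it instead of the literal ID:
#   ami = data.aws_ami.amazon_linux_2.id
```

<p>This is only a sketch; pinning an exact AMI ID, as the post does, is also a valid choice when reproducibility matters more than freshness.</p>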
<h2>Creating a nomad cluster</h2>
<h3>Goal</h3>
<p>Create one <code>Consul</code> server, then create one nomad server that points at the Consul server's assigned private IP.</p>
<p>Then, passing the nomad server's private IP as a parameter, create one nomad client.</p>
<h3>Creating the security groups</h3>
<ul>
<li>sg_outbound: allow outbound traffic</li>
<li>sg_allow_nomad / sg_protect_nomad: allow traffic between the nomad servers</li>
<li>sg_allow_me: allow my own IP</li>
</ul>
<pre><code class="language-terraform">resource "aws_security_group" "sg_outbound" {
  name = "sg_outbound"
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "sg_allow_nomad" {
  name = "sg_allow_nomad"
}

resource "aws_security_group" "sg_protect_nomad" {
  name = "sg_protect_nomad"
  ingress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    security_groups = [aws_security_group.sg_allow_nomad.id]
  }
}

resource "aws_security_group" "sg_allow_me" {
  name = "sg_allow_me"
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["183.101.XXX.XXX/32"]    # allow my own IP
  }
}</code></pre>
<h3>Creating the Consul server</h3>
<p>You can confirm the server came up by opening http://&lt;Consul server public IP&gt;:8500.</p>
<p>Once it is confirmed, remove <code>key_name</code> and <code>aws_security_group.sg_allow_me.id</code>.</p>
<pre><code class="language-terraform">resource "aws_instance" "consul_server" {
  ami           = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"
  key_name      = "skyer9"  # use your own key pair name

  vpc_security_group_ids = [
    aws_security_group.sg_outbound.id,
    aws_security_group.sg_allow_nomad.id,
    aws_security_group.sg_protect_nomad.id,
    aws_security_group.sg_allow_me.id
  ]

  user_data = &lt;&lt;-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install docker -y
    sudo service docker start
    sudo systemctl enable docker.service
    sudo docker run -d --name consul -p 8500:8500 -p 8600:8600/udp consul
  EOF

  tags = {
    Name = "my consul server"
  }
}</code></pre>
<h3>Creating the nomad server</h3>
<pre><code class="language-bash">vi aws/userdata_nomad_server.sh
-------------------------------------
#!/bin/bash

sudo yum update -y
# sudo yum install docker -y
# sudo service docker start
# sudo systemctl enable docker.service

wget https://releases.hashicorp.com/nomad/1.1.3/nomad_1.1.3_linux_amd64.zip
unzip nomad_1.1.3_linux_amd64.zip
sudo chown root:root nomad
sudo chmod 755 nomad
sudo mv nomad /usr/bin/

sudo mkdir /etc/nomad.d

sudo bash -c 'cat &lt;&lt; EOF &gt; /etc/nomad.d/server.hcl
datacenter = "dc1"
data_dir   = "/var/lib/nomad/"
bind_addr  = "0.0.0.0"

server {
  enabled          = true
  bootstrap_expect = 1
}

consul {
  address = "CONSUL_SERVER_IP:8500"
}
EOF'

# The \$ below is escaped so that $MAINPID is written literally into the
# unit file instead of being expanded (to nothing) by the inner bash.
sudo bash -c 'cat &lt;&lt; EOF &gt; /lib/systemd/system/nomad.service
[Unit]
Description=Nomad
Documentation=https://nomadproject.io/docs/
Wants=network-online.target
After=network-online.target

# When using Nomad with Consul it is not necessary to start Consul first. These
# lines start Consul before Nomad as an optimization to avoid Nomad logging
# that Consul is unavailable at startup.
#Wants=consul.service
#After=consul.service

[Service]
ExecReload=/bin/kill -HUP \$MAINPID
ExecStart=/usr/bin/nomad agent -config /etc/nomad.d
KillMode=process
KillSignal=SIGINT
LimitNOFILE=65536
LimitNPROC=infinity
Restart=on-failure
RestartSec=2
StartLimitBurst=3
StartLimitIntervalSec=10
TasksMax=infinity
OOMScoreAdjust=-1000

[Install]
WantedBy=multi-user.target
EOF'

sudo systemctl daemon-reload
sudo systemctl enable nomad
sudo systemctl start nomad
-------------------------------------</code></pre>
<p><code>depends_on</code> forces the dependency, i.e. the creation order.</p>
<p><code>aws_instance.consul_server.private_ip</code> fetches the Consul server's private IP.</p>
<pre><code class="language-terraform">resource "aws_instance" "nomad_server" {
  ami           = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"
  key_name      = "skyer9"  # use your own key pair name
  # count       = 5         # create 5 instances

  vpc_security_group_ids = [
    aws_security_group.sg_outbound.id,
    aws_security_group.sg_allow_nomad.id,
    aws_security_group.sg_protect_nomad.id,
    aws_security_group.sg_allow_me.id
  ]

  user_data = replace(file("./aws/userdata_nomad_server.sh"), "CONSUL_SERVER_IP", aws_instance.consul_server.private_ip)

  tags = {
    Name = "my nomad server"
  }

  depends_on = [aws_instance.consul_server]
}</code></pre>
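<p>The <code>replace(file(...))</code> trick works, but Terraform's built-in <code>templatefile()</code> function does the same substitution with named variables. A sketch of that alternative, assuming (my change, not the post's) the script is renamed to <code>userdata_nomad_server.sh.tpl</code> and its <code>CONSUL_SERVER_IP</code> placeholder is rewritten as <code>${consul_server_ip}</code>:</p>

```terraform
# Inside resource "aws_instance" "nomad_server":
user_data = templatefile("./aws/userdata_nomad_server.sh.tpl", {
  consul_server_ip = aws_instance.consul_server.private_ip
})
```

<p>One caveat: <code>templatefile()</code> interprets every <code>${...}</code> sequence in the script, so any shell variables written that way would need to be escaped as <code>$${...}</code>.</p>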
<h3>Creating the nomad client</h3>
<pre><code class="language-bash">vi aws/userdata_nomad_client.sh
-------------------------------------
#!/bin/bash

sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo systemctl enable docker.service

sudo yum install java-11-amazon-corretto-headless -y

wget https://releases.hashicorp.com/nomad/1.1.3/nomad_1.1.3_linux_amd64.zip
unzip nomad_1.1.3_linux_amd64.zip
sudo chown root:root nomad
sudo chmod 755 nomad
sudo mv nomad /usr/bin/

sudo mkdir /etc/nomad.d

sudo bash -c 'cat &lt;&lt; EOF &gt; /etc/nomad.d/client.hcl
# bind_addr = "127.0.0.1"

datacenter = "dc1"              # cluster name
data_dir   = "/var/lib/nomad/"

client {
    enabled = true
    servers = ["NOMAD_SERVER_IP"]   # nomad server private IP
}
EOF'

# The \$ below is escaped so that $MAINPID is written literally into the
# unit file instead of being expanded (to nothing) by the inner bash.
sudo bash -c 'cat &lt;&lt; EOF &gt; /lib/systemd/system/nomad.service
[Unit]
Description=Nomad
Documentation=https://nomadproject.io/docs/
Wants=network-online.target
After=network-online.target

# When using Nomad with Consul it is not necessary to start Consul first. These
# lines start Consul before Nomad as an optimization to avoid Nomad logging
# that Consul is unavailable at startup.
#Wants=consul.service
#After=consul.service

[Service]
ExecReload=/bin/kill -HUP \$MAINPID
ExecStart=/usr/bin/nomad agent -config /etc/nomad.d
KillMode=process
KillSignal=SIGINT
LimitNOFILE=65536
LimitNPROC=infinity
Restart=on-failure
RestartSec=2
StartLimitBurst=3
StartLimitIntervalSec=10
TasksMax=infinity
OOMScoreAdjust=-1000
Environment="HOME=/root"

[Install]
WantedBy=multi-user.target
EOF'

sudo systemctl daemon-reload
sudo systemctl enable nomad
sudo systemctl start nomad
-------------------------------------</code></pre>
<pre><code class="language-terraform">resource "aws_instance" "nomad_client" {
  ami           = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"
  key_name      = "skyer9"  # use your own key pair name

  vpc_security_group_ids = [
    aws_security_group.sg_outbound.id,
    aws_security_group.sg_allow_nomad.id,
    aws_security_group.sg_protect_nomad.id,
    aws_security_group.sg_allow_me.id
  ]

  # when there is a single nomad server
  user_data = replace(file("./aws/userdata_nomad_client.sh"), "NOMAD_SERVER_IP", aws_instance.nomad_server.private_ip)
  # when there are two or more nomad servers
  # user_data = replace(file("./aws/userdata_nomad_client.sh"), "NOMAD_SERVER_IP", join("\",\"", aws_instance.nomad_server.*.private_ip))

  tags = {
    Name = "my nomad client"
  }

  depends_on = [aws_instance.nomad_server]
}</code></pre>
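<p>The commented-out <code>join(...)</code> line above hints at running more than one server; the same <code>count</code> mechanism scales the clients too. A sketch that would replace the single-client resource with three identical ones (the <code>count.index</code>-based naming is my own convention):</p>

```terraform
resource "aws_instance" "nomad_client" {
  count         = 3                 # three identical nomad clients
  ami           = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"

  vpc_security_group_ids = [
    aws_security_group.sg_outbound.id,
    aws_security_group.sg_allow_nomad.id,
    aws_security_group.sg_protect_nomad.id
  ]

  user_data = replace(file("./aws/userdata_nomad_client.sh"), "NOMAD_SERVER_IP", aws_instance.nomad_server.private_ip)

  tags = {
    # each instance gets a distinct Name tag: 0, 1, 2
    Name = "my nomad client ${count.index}"
  }

  depends_on = [aws_instance.nomad_server]
}
```

<p>Each client registers itself with the nomad server on boot, so no further wiring is needed as the count changes.</p>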
<h2>Auto Scaling</h2>
<p><a href="https://github.com/hashicorp/nomad-autoscaler-demos/tree/main/cloud/shared/terraform/modules/shared-nomad-jobs/files">Reference</a></p>
<h3>Goal</h3>
<p>Implement Auto Scaling for the nomad client nodes.</p>
<p>Nomad supports two kinds of Auto Scaling: <code>Application Level</code> and <code>Cluster Level</code>.</p>
<h3>Installing the Nomad Autoscaler</h3>
<p>The Nomad Autoscaler can be installed on a standalone server, but registering it as a Job is the usual approach.</p>
<pre><code class="language-nomad">vi prometheus.nomad
------------------------------
job "prometheus" {
  datacenters = ["dc1"]

  group "prometheus" {
    count = 1

    network {
      port "prometheus_ui" {}
    }

    task "prometheus" {
      driver = "docker"

      config {
        image = "prom/prometheus:v2.25.0"
        ports = ["prometheus_ui"]

        args = [
          "--config.file=/etc/prometheus/config/prometheus.yml",
          "--storage.tsdb.path=/prometheus",
          "--web.listen-address=0.0.0.0:${NOMAD_PORT_prometheus_ui}",
          "--web.console.libraries=/usr/share/prometheus/console_libraries",
          "--web.console.templates=/usr/share/prometheus/consoles",
        ]

        volumes = [
          "local/config:/etc/prometheus/config",
        ]
      }

      template {
        data = &lt;&lt;EOH
---
global:
  scrape_interval:     1s
  evaluation_interval: 1s
scrape_configs:
  - job_name: nomad
    scrape_interval: 10s
    metrics_path: /v1/metrics
    params:
      format: ['prometheus']
    consul_sd_configs:
    - server: '{{ env "attr.unique.network.ip-address" }}:8500'
      services: ['nomad','nomad-client']
    relabel_configs:
      - source_labels: ['__meta_consul_tags']
        regex: .*,http,.*
        action: keep
  - job_name: traefik
    metrics_path: /metrics
    consul_sd_configs:
    - server: '{{ env "attr.unique.network.ip-address" }}:8500'
      services: ['traefik-api']
EOH

        change_mode   = "signal"
        change_signal = "SIGHUP"
        destination   = "local/config/prometheus.yml"
      }

      resources {
        cpu    = 100
        memory = 256
      }

      service {
        name = "prometheus"
        port = "prometheus_ui"

        tags = [
          "traefik.enable=true",
          "traefik.tcp.routers.prometheus.entrypoints=prometheus",
          "traefik.tcp.routers.prometheus.rule=HostSNI(`*`)"
        ]

        check {
          type     = "http"
          path     = "/-/healthy"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
------------------------------</code></pre>
<pre><code class="language-bash">nomad run prometheus.nomad</code></pre>
<pre><code class="language-nomad">vi autoscaler.nomad
------------------------------
job "autoscaler" {
  datacenters = ["dc1"]

  group "autoscaler" {
    count = 1

    task "autoscaler" {
      driver = "docker"

      config {
        image   = "hashicorp/nomad-autoscaler:0.3.3"
        command = "nomad-autoscaler"
        args    = ["agent", "-config", "${NOMAD_TASK_DIR}/config.hcl"]
      }

      template {
        data = &lt;&lt;EOF
plugin_dir = "/plugins"

nomad {
  address = "http://{{ env "attr.unique.network.ip-address" }}:4646"
}

apm "nomad" {
  driver = "nomad-apm"
  config = {
    address = "http://{{ env "attr.unique.network.ip-address" }}:4646"
  }
}

apm "prometheus" {
  driver = "prometheus"
  config = {
    address = "http://{{ env "attr.unique.network.ip-address" }}:9090"
  }
}

strategy "target-value" {
  driver = "target-value"
}
EOF

        destination = "${NOMAD_TASK_DIR}/config.hcl"
      }
    }
  }
}
------------------------------</code></pre>
<pre><code class="language-bash">nomad run autoscaler.nomad</code></pre>
<h3>Application Level</h3>
<pre><code class="language-bash">vi hello.nomad</code></pre>
<pre><code class="language-nomad">job "hello" {
  datacenters = ["dc1"]
  type        = "service"

  group "helloGroup" {
    network {
      port "http" {}
      port "https" {}
      # port "lb" { static = 8080 }
    }

    count = 1

    scaling {
      enabled = true
      min     = 1
      max     = 2

      policy {
        cooldown            = "1m"
        evaluation_interval = "30s"

        check "avg_sessions" {
          source = "prometheus"
          query  = "sum(traefik_entrypoint_open_connections{entrypoint=\"webapp\"})/scalar(nomad_nomad_job_summary_running{task_group=\"demo\"})"

          strategy "target-value" {
            target = 10
          }
        }
      }
    }

    # Define a task to run
    task "helloTask" {
      driver = "java"

      config {
        jar_path    = "local/TestPublic-0.0.2-SNAPSHOT.jar"
        jvm_options = ["-Xmx128m", "-Xms128m"]
      }

      env {
        PORT    = "${NOMAD_PORT_http}"
        NODE_IP = "${NOMAD_IP_http}"
      }

      service {
        name = "helloService"
        # port = "lb"
        port = "http"

        check {
          type     = "http"
          path     = "/hello"     # health check URL
          interval = "2s"
          timeout  = "2s"
        }
      }

      resources {
        cpu    = 500      # 500 MHz
        memory = 200      # 200 MB
      }

      # The jar must be downloaded from a remote source.
      artifact {
        source = "https://github.com/skyer9/TestPublic/raw/master/TestPublic-0.0.2-SNAPSHOT.jar"
      }
    }
  }
}</code></pre>
<pre><code class="language-bash">nomad run hello.nomad</code></pre>
<h3>Cluster Level</h3>
<p><a href="https://github.com/hashicorp/learn-terraform-aws-default-tags/blob/main/main.tf">Reference</a></p>
<p><a href="https://github.com/hashicorp/nomad-autoscaler-demos/tree/learn/cloud/aws">Reference</a></p>
<h2>Auto Scaling + Load Balancing</h2>
<h3>Goal</h3>
<p>Add <code>Auto Scaling + Load Balancing</code> to the nomad clients.</p>
<pre><code class="language-terraform">data "aws_availability_zones" "available" {
  state = "available"
}</code></pre>
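<p>The availability-zone lookup above is the usual first step toward cluster-level scaling. As a sketch of where it typically leads (all resource names below are my assumptions, and a default VPC is assumed): a launch template for the nomad clients plus an Auto Scaling Group spread across the discovered zones.</p>

```terraform
# Launch template describing one nomad client instance
resource "aws_launch_template" "nomad_client" {
  name_prefix   = "nomad-client-"
  image_id      = "ami-0a0de518b1fc4524c"
  instance_type = "t2.nano"
}

# Auto Scaling Group spanning every available zone found above
resource "aws_autoscaling_group" "nomad_client" {
  availability_zones = data.aws_availability_zones.available.names
  min_size           = 1
  max_size           = 3
  desired_capacity   = 1

  launch_template {
    id      = aws_launch_template.nomad_client.id
    version = "$Latest"
  }
}
```

<p>In a complete setup, the Nomad Autoscaler's cluster-scaling target plugin would adjust this ASG's desired capacity, and a load balancer would front the clients; the linked nomad-autoscaler-demos repository shows a full working version.</p>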