This is the user data used:
#!/bin/bash
yum install httpd -y
yum update -y
aws s3 cp s3://YOURBUCKETNAMEHERE/index.html /var/www/html/
service httpd start
chkconfig httpd on
A NAT gateway is configured for the private EC2 instance, and the instance role has the AmazonS3FullAccess policy attached.
Please help me troubleshoot!
You can add some code to the start of your user-data script to redirect the output to logs.
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
Then you can use those logs to troubleshoot from the AWS Console: select the instance, then Actions -> Instance settings -> Get system log.
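A minimal sketch of the full user-data script with the logging line added at the top (the bucket name is the placeholder from the question):
#!/bin/bash
# Send all user-data output to a log file, syslog, and the instance console
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
yum update -y
yum install -y httpd
aws s3 cp s3://YOURBUCKETNAMEHERE/index.html /var/www/html/
service httpd start
chkconfig httpd on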
Related
I am trying to run a command over SSH on a GCP VM from Airflow via the SSHOperator, as described here:
# Import paths assume recent versions of the SSH and Google providers
from airflow.providers.google.cloud.hooks.compute_ssh import ComputeEngineSSHHook
from airflow.providers.ssh.operators.ssh import SSHOperator

ssh_to_vm_task = SSHOperator(
    task_id="ssh_to_vm_task",
    ssh_hook=ComputeEngineSSHHook(
        instance_name=<MYINSTANCE>,
        project_id=<MYPROJECT>,
        zone=<MYZONE>,
        use_oslogin=False,
        use_iap_tunnel=True,
        use_internal_ip=False,
    ),
    command="echo test_message",
    dag=dag,
)
However, I get the following error: airflow.exceptions.AirflowException: SSH operator error: [Errno 2] No such file or directory: 'gcloud'.
Docker is installed via docker-compose following these instructions.
Other Airflow GCP operators (such as BigQueryCheckOperator) work correctly. So at first sight it does not seem like a configuration problem.
Could you please help me? Is this a bug?
It seems the issue is that gcloud was not installed in the Docker container by default. This was solved by installing the Google Cloud SDK in the image: it is necessary to add
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
to the Dockerfile that is used to install Airflow and its dependencies.
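To confirm the fix, check that the binary is now visible inside the container; the service name airflow-worker is an assumption about your docker-compose setup:
docker-compose exec airflow-worker gcloud --version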
Check that TCP port 22 is allowed through the firewall on your GCP VM instance, and make sure the VM itself allows SSH access and is properly configured. Furthermore, be sure that the IP address from which you are trying to SSH to the VM is whitelisted in the firewall rules.
You can use the following command in GCP to check the ingress firewall rules for the network that contains the destination VM instance.
This is an example of what you have to do. Note that gcloud only honors the last --filter flag on the command line, so both conditions are combined into a single filter expression:
```
gcloud compute firewall-rules list \
    --filter="network=[NETWORK-NAME] AND direction=INGRESS" \
    --sort-by priority \
--format="table(
name,
network,
direction,
priority,
sourceRanges.list():label=SRC_RANGES,
destinationRanges.list():label=DEST_RANGES,
allowed[].map().firewall_rule().list():label=ALLOW,
denied[].map().firewall_rule().list():label=DENY,
sourceTags.list():label=SRC_TAGS,
sourceServiceAccounts.list():label=SRC_SVC_ACCT,
targetTags.list():label=TARGET_TAGS,
targetServiceAccounts.list():label=TARGET_SVC_ACCT
)"
```
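If no rule currently allows SSH, this is a sketch of one that would; the rule name, network, and source range are placeholders to substitute with your own values:
```
gcloud compute firewall-rules create allow-ssh-ingress \
    --network=[NETWORK-NAME] \
    --direction=INGRESS \
    --allow=tcp:22 \
    --source-ranges=[YOUR-IP]/32
```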
I have created a WordPress service using Cloud Run. I deployed it using the command below:
gcloud beta run deploy wp --image gcr.io/<project>/wp:v1 \
--add-cloudsql-instances <project>:us-central1:mysql2 \
--update-env-vars DB_HOST='127.0.0.1',DB_NAME=mysql2,DB_USER=wordpress,DB_PASSWORD=password,CLOUDSQL_INSTANCE='<project>:us-central1:mysql2'
The service is deployed fine, but when trying to access it I get the error below:
<h1>Error: Forbidden</h1>
<h2>Your client does not have permission to get URL <code>/</code> from this server.</h2>
UPDATES:
The Dockerfile is as follows. I am following this:
https://github.com/acadevmy/cloud-run-wordpress
FROM wordpress:5.2.1-php7.3-apache
EXPOSE 80
# Use the PORT environment variable in Apache configuration files.
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf
# wordpress conf
COPY wordpress/wp-config.php /var/www/html/wp-config.php
# download and install cloud_sql_proxy
RUN apt-get update && apt-get -y install net-tools wget && \
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /usr/local/bin/cloud_sql_proxy && \
chmod +x /usr/local/bin/cloud_sql_proxy
COPY wordpress/cloud-run-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/local/sbin/apache2ctl -D FOREGROUND"]
##docker-entrypoint.sh
#!/usr/bin/env bash
# Start the sql proxy
cloud_sql_proxy -instances=$CLOUDSQL_INSTANCE=tcp:3306 &
# Execute the rest of your ENTRYPOINT and CMD as expected.
exec "$@"
The following can be seen in the console log:
We allowed unauthenticated invocations, and now the error is
"Error establishing a database connection"
Additional Updates:
The DB is running with a private IP, so I am using Serverless VPC Access.
DB information is as follows:
gcloud sql instances list
NAME    DATABASE_VERSION  LOCATION       TIER         PRIMARY_ADDRESS  PRIVATE_ADDRESS  STATUS
mysql2  MYSQL_5_7         us-central1-b  db-f1-micro  -                10.0.100.5       RUNNABLE
This is the Serverless VPC Access connector and its IP range:
testserverlessvpc kube-shared-vpc us-central1 192.168.60.0/28 200 300
Now I have added an additional parameter, as shown below, to both the gcloud run deploy and gcloud run services update commands:
--vpc-connector projects/< HOST-Project >/locations/us-central1/connectors/testserverlessvpc
But during gcloud run deploy it fails with the error below:
⠏ Deploying new service... Internal system error, system will retry.
I have an ECS cluster defined in AWS and an Auto Scaling Group that I use to add/remove instance to handle tasks as necessary. I have the ASG setup so that it is creating the EC2 instance at the appropriate time, but it won't connect to the ECS Cluster unless I manually go in and disable/enable the ECS service.
I am using the Amazon Linux 2 AMI on the EC2 machines, and everything is in the same region/account, etc.
I have included my user data below.
#!/bin/bash
yum update -y
amazon-linux-extras disable docker
amazon-linux-extras install -y ecs
echo "ECS_CLUSTER={CLUSTERNAME}" >> /etc/ecs/ecs.config
systemctl enable --now ecs
As mentioned, this installs the ECS service and sets the config file properly, but the enable doesn't actually connect the machine; running the same disable/enable commands on the machine once it is up connects without a problem. What am I missing?
First thing, the correct syntax is (no braces around the cluster name):
#!/usr/bin/env bash
echo "ECS_CLUSTER=CLUSTER_NAMe" >> /etc/ecs/ecs.config
Once you update the config, it is better to restart the ECS agent.
#!/usr/bin/env bash
echo "ECS_CLUSTER=CLUSTER_NAME" >> /etc/ecs/ecs.config
# Update the ECS init package; useful when using a custom AMI
sudo yum update -y ecs-init
# Pull the latest ECS agent image
/usr/bin/docker pull amazon/amazon-ecs-agent:latest
# Restart Docker, then the ECS agent (a systemd service on Amazon Linux 2)
sudo service docker restart
sudo systemctl restart ecs
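You can then verify that the instance has registered with the cluster, for example with the AWS CLI (the cluster name is a placeholder):
aws ecs list-container-instances --cluster CLUSTER_NAME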
I ended up solving this using the old adage: turn it off and on again.
e.g. I added shutdown -r 0 to the bottom of the user-data script to restart the machine after it was "configured", and it connected right away.
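A sketch of the full user-data with the reboot appended, using the brace-less cluster-name syntax from the answer above (the cluster name is a placeholder):
#!/bin/bash
yum update -y
amazon-linux-extras disable docker
amazon-linux-extras install -y ecs
echo "ECS_CLUSTER=CLUSTERNAME" >> /etc/ecs/ecs.config
systemctl enable --now ecs
# Reboot once; the agent connects to the cluster when the instance comes back up
shutdown -r 0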
I have an AWS ECS cluster defined with a service that uses the Replica service type. It creates an EC2 instance with a Docker container. I can access it through the browser and all that...
The issue is that I have to connect through SSH to the EC2 instance and run:
sudo yum update -y
sudo yum install -y ruby
sudo yum install -y wget
cd /home/ec2-user
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
This installs the CodeDeploy agent, so I can connect GitHub to the instance and CI/CD the code.
I would like to set this up automatically on every server that the ECS definition creates. For example, if I stop the EC2 instance, the cluster raises a new EC2 instance, which doesn't have this agent...
I saw that I should configure my Amazon ECS container instance with user data, but first of all I am not able to find this option, and I am not quite sure whether it runs in the EC2 instance or in the Docker container itself.
Based on the comments.
The solution was to use a Launch Template or Launch Configuration.
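A sketch of the user data you could put in the Launch Template so every new instance installs the CodeDeploy agent at boot; the commands mirror the manual steps from the question (eu-west-1 installer bucket):
#!/bin/bash
yum update -y
yum install -y ruby wget
cd /home/ec2-user
# Region-specific CodeDeploy installer, as in the question
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
./install auto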
I was creating a new AWS EC2 instance. In step 1 I selected an Amazon Linux AMI, and in step 2, after some basic details, I provided the following advanced details:
#!/bin/bash
yum install httpd -y
yum update -y
service httpd start
chkconfig httpd on
echo "<html><h1>Hello Test Page!</h1></html>" > /var/www/html/index.html
Somehow this script did not execute after the EC2 instance was ready. I have the following questions:
Can we get a log of what exactly happened when executing this script?
Also, from the console, is it possible to see what values were specified in Advanced details while setting up an EC2 instance?
Log into your EC2 instance and check /var/log/cloud-init-output.log for any errors.
To check the user data specified, I don't think you can see it on the console, but you can verify it by querying http://169.254.169.254/latest/user-data/ after logging into the EC2 instance.
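For example, from inside the instance (IMDSv1 shown first; with IMDSv2 you fetch a session token and pass it as a header):
curl http://169.254.169.254/latest/user-data/
# IMDSv2 variant:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/user-data/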