I am currently unable to access my RDS environment variables on the EC2 instance; both are linked through Elastic Beanstalk.
I am trying to read the RDS environment variables in a PHP script through the $_SERVER superglobal, but they always come back as empty strings. Running echo ${RDS_HOSTNAME} in the console also prints an empty string.
However, when I run /opt/elasticbeanstalk/bin/get-config environment I get the following output with the correct credentials.
{
"COMPOSER_HOME":"/root",
"RDS_DB_NAME":"dbname",
"RDS_HOSTNAME":"dbhost.rds.amazonaws.com",
"RDS_PASSWORD":"dbpassword",
"RDS_PORT":"3306",
"RDS_USERNAME":"dbusername"
}
I've also connected to the database with the mysql client, just to make sure that the EC2 instance can reach the RDS database, and it worked:
mysql -u dbusername -h dbhost.rds.amazonaws.com -p
Hi, I need to transfer a file to an EC2 machine via the SSM agent. I have successfully installed the SSM agent on the EC2 instances, and from the UI I am able to start a session via Session Manager and log in to the shell of that EC2 machine.
Now I am trying to automate this via boto3, using the code below:
import boto3

ssm_client = boto3.client('ssm', 'us-west-2')
resp = ssm_client.send_command(
    DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
    Parameters={'commands': ['echo "hello world" >> /tmp/test.txt']},
    InstanceIds=['i-xxxxx'],
)
The above works fine and I am able to create a file called test.txt on the remote machine, but that is via the echo command.
Instead, I need to send a file from my local machine to this remote EC2 machine via the SSM agent, hence I did the following:
I modified /etc/ssh/ssh_config with a proxy command as below.
# SSH over Session Manager
host i-* mi-*
ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
Then, in the above code, I tried to start a session with the line below, and that also succeeds.
response = ssm_client.start_session(Target='i-04843lr540028e96a')
Now I am not sure how to use this session response, or how to use this AWS SSM session to send a file.
Environment description:
Source: pod running in an EKS cluster
Destination: EC2 machine (which has the SSM agent running)
File to be transferred: an important private key which will be used by some process on the EC2 machine, and it will be different for each machine
Solution tried:
I can push the file to S3 from the source, and a command run through the SSM boto3 library can pull it from S3 and store it on the remote EC2 machine.
But I don't want to do the above because I don't want to store the private key in S3, so I want to send the file directly from memory to the remote EC2 machine.
Basically I want to achieve the scp transfer mentioned in this AWS document: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#sessions-start-ssh
If you have SSH over SSM set up, you can just use normal scp, like so:
scp file.txt ec2-user@i-04843lr540028e96a:
If it isn't working, make sure you have:
Session Manager plugin installed locally
Your key pair on the instance and locally (you will need to define it in your ssh config, or via the -i switch)
SSM agent on the instance (installed by default on Amazon Linux 2)
An instance role attached to the instance that allows Session Manager (it needs to be there at boot, so if you just attached it, reboot)
Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html
If you need more detail, give me more info on your setup, and I'll try and help.
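Since the original goal was to automate this from Python/boto3, here is a minimal sketch of driving the same scp transfer from a script. It assumes the ProxyCommand entry from the ssh config above is in place; the key path, local file, and remote path are placeholders for your own setup.
import subprocess

# scp resolves the i-* host through the ProxyCommand in the ssh config,
# which tunnels the connection over an SSM session.
subprocess.run(
    [
        "scp",
        "-i", "/path/to/keypair.pem",  # key pair registered on the instance
        "/local/path/private.key",     # the file to transfer
        "ec2-user@i-04843lr540028e96a:/home/ec2-user/private.key",
    ],
    check=True,
)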
Can we set an environment variable on an EC2 instance from the AWS console?
Or do we need to use another service to achieve this?
Also, can we load this variable inside Docker?
That appears to be out of scope for the AWS Console. EC2, however, is basically AWS-hosted VMs, so anything you could normally do on that OS can be done.
Just connect to the machine and do what you would normally do. (See the AWS CLI Guide for further help)
Using the AWS Console:
Stop the EC2 instance
For that EC2 instance, from the top dropdown select Actions > Instance Settings > View/Change User Data
Create a script that adds the required environment variables to init scripts, for example:
#!/bin/sh
echo 'export MY_ENV_VARIABLE=value' >> ~/.bashrc
or add a script to /etc/init.d or another similar location.
Start the EC2 instance, and the environment variables will be available.
If you want to pass an EC2 instance environment variable through to a Docker container environment variable:
When you start a container through docker run, you can pass the value with the --env parameter on the command line.
e.g. docker run -e DOCKER_ENV_VARIABLE=$MY_ENV_VARIABLE ...
https://docs.docker.com/engine/reference/commandline/run/#options
You can use user data to load the variable and afterwards pass it to your Docker container using docker run -e CONTAINER_ENV_VAR=$EC2_ENV_VAR, or set it in your Dockerfile.
For keys or any sensitive data I would advise you to use Parameter Store; you can put passwords, users, or whatever data you want there and read them with a tool called chamber, or directly through the SSM API, as sketched below.
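For example, here is a minimal boto3 sketch of reading a SecureString parameter at runtime; the parameter name /myapp/db_password and the region are placeholders, not anything defined above.
import boto3

ssm = boto3.client("ssm", region_name="us-west-2")

# /myapp/db_password is a hypothetical SecureString parameter.
resp = ssm.get_parameter(Name="/myapp/db_password", WithDecryption=True)
db_password = resp["Parameter"]["Value"]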
I am using the AWS CodeDeploy agent and deploying my project to the server through the Bitbucket plugin.
The CodeDeploy agent first executes the script files which contain the commands to run my Spring Boot project.
Since I have two environments, one for development and another for production, I want the script to do things differently based on the environment, i.e. on the two different instances.
My plan is to fetch the AWS static IP address which is mapped to the instance, and from that determine the environment (production or stage).
How do I fetch the Elastic IP address through sh commands?
Static IP will work.
A more natural CodeDeploy way to solve this is to set up two CodeDeploy deployment groups, one for your development env and the other for your production env. Then, in your script, you can use the environment variables that CodeDeploy sets during the deployment to tell which env you are deploying to.
Here is a blog post about how to use CodeDeploy environment variables: https://aws.amazon.com/blogs/devops/using-codedeploy-environment-variables/
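As a rough sketch, a hook script referenced from appspec.yml could branch on DEPLOYMENT_GROUP_NAME (one of the variables described in that post), assuming your deployment groups are named "stage" and "production":
#!/usr/bin/env python3
import os

# CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle hook scripts.
group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")

if group == "production":
    print("running production-specific steps")
elif group == "stage":
    print("running stage-specific steps")
else:
    print("unknown deployment group: " + group)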
You could do the following:
id=$( curl http://169.254.169.254/latest/meta-data/instance-id )
eip=$( aws ec2 describe-addresses --filters Name=instance-id,Values=${id} | jq -r '.Addresses[].PublicIp' )
The above gets the instance-id from metadata, then uses the aws cli to look for elastic IPs filtered by the id from metadata. Using jq this output can then be parsed down to the IP you are looking for.
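If you would rather do the same lookup from a script instead of the CLI and jq, a boto3 sketch along these lines should work, assuming the plain (IMDSv1) metadata endpoint is reachable, a default region is configured, and the instance role allows ec2:DescribeAddresses:
import urllib.request

import boto3

# Instance ID from the metadata service, then the Elastic IP associated with it (if any).
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id"
).read().decode()

ec2 = boto3.client("ec2")
addresses = ec2.describe_addresses(
    Filters=[{"Name": "instance-id", "Values": [instance_id]}]
)["Addresses"]
eip = addresses[0]["PublicIp"] if addresses else None
print(eip)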
Query the metadata server
eip=`curl -s 169.254.169.254/latest/meta-data/public-ipv4`
echo $eip
The solution is completely off on a tangent from what I originally asked, but it was enough for my requirement.
I just needed to know which environment I am in to do certain actions, so what I did was set an environment variable in an independent script file, where its value indicates the environment.
For example, let's say in a file env-variables.sh:
export profile=stage
In the script file where the commands have to be executed based on the environment, I access it this way:
source /test/env-variables.sh
echo "current profile is $profile"
if [ "$profile" = "stage" ]
then
    echo stage
elif [ "$profile" = "production" ]
then
    echo production
else
    echo failure
fi
Hope someone finds it useful.
As far as I know, boto3 will try to load credentials from the instance metadata service.
If I am running this code inside an EC2 instance, I expect to have no problem. But when my code is dockerized, how will boto3 find the metadata service?
The Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which can be used to get credentials. These special variables are provided only to the process with PID 1; the script specified in the Dockerfile ENTRYPOINT gets PID 1.
There are many networking modes, and details might differ between them. More information can be found in: How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
For the awsvpc networking mode, if you run printenv as PID 1 you will see something similar to this:
AWS_EXECUTION_ENV=AWS_ECS_FARGATE
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/0f891318-ab05-46fe-8fac-d5113a1c2ecd
HOSTNAME=ip-172-17-0-123.ap-south-1.compute.internal
AWS_DEFAULT_REGION=ap-south-1
AWS_REGION=ap-south-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/2c9107c385e04a70b30d3cc4d4de97e7-527074092
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/2c9107c385e04a70b30d3cc4d4de97e7-527074092
It also gets tricky to debug, since after SSH'ing into the container you are using a PID other than 1, meaning that services which need to get credentials might fail to do so if you run them manually.
ECS task metadata endpoint documentation
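A quick way to check from inside the running task whether boto3 actually picks up the task role is a sketch like the following; it assumes nothing beyond boto3 being installed in the image.
import os

import boto3

# Present when the ECS agent has injected the task role credentials endpoint
# (in the PID 1 environment, as noted above).
print(os.environ.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"))

# boto3 resolves credentials from that endpoint automatically; this call
# fails if no credentials could be found.
print(boto3.client("sts").get_caller_identity()["Arn"])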
Find the .aws folder at ~/.aws on your machine and copy it to the Docker container's /root folder.
.aws contains the files which hold your AWS access key and secret key.
You can easily copy it to a currently running container from your local machine with:
docker cp ~/.aws <container_id>:/root
I have a cluster "my-cluster".
If I try to add an ECS instance, there are none available. However, if I create a cluster "default", then I have an instance available.
I have deleted the file /var/lib/ecs/data/ecs_agent_data.json as suggested here:
Why can't my ECS service register available EC2 instances with my ELB?
Where can I change my instance/load balancer to allow me to use an EC2 instance in "my-cluster" rather than having to use the "default" cluster?
Per the ECS Agent Configuration docs:
If you are manually starting the Amazon ECS container agent (for non-Amazon ECS-optimized AMIs), you can use these environment variables in the docker run command that you use to start the agent with the syntax --env=VARIABLE_NAME=VARIABLE_VALUE. For sensitive information, such as authentication credentials for private repositories, you should store your agent environment variables in a file and pass them all at once with the --env-file path_to_env_file option.
One of the environment variables in the list is ECS_CLUSTER. So start the agent like this:
docker run -e ECS_CLUSTER=my-cluster ...
If you're using the ECS-optimized AMI, you can instead set ECS_CLUSTER=my-cluster in the agent configuration file /etc/ecs/ecs.config (typically from user data at launch) and restart the agent.