I have a quick question about using AWS SNS.
I have deployed an EC2 instance (t2.micro, Linux) in us-west-1 (N. California). I have written a Python script using boto3 to send a simple text message to my phone. Later I discovered that SNS text messaging is not available for instances deployed outside us-east-1 (N. Virginia). Up to this point it made sense, because I see the error below when I execute my Python script, as the region is defined as "us-west-1" in aws configure (AWS CLI) and also in my Python script.
botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: PhoneNumber Reason:
But as a test, when I changed the region in aws configure and in my Python script to "us-east-1", my script pushed a text message to my phone. Isn't that weird? Can anyone please explain why this works just by changing the region in the AWS CLI and in my Python script, even though my instance is still in us-west-1 and I don't see the "Publish text message" option on the SNS dashboard in the N. California region?
Is redefining the AWS CLI with us-east-1 similar to deploying a new instance altogether in us-east-1? I don't think so, but correct me if I am wrong. Or is it like having an instance in us-west-1 but just using the SNS service from us-east-1? Please shed some light.
Here is my Python script, in case anyone needs to look at it (it's a simple snippet).
import boto3

def send_message():
    # Create an SNS client
    client = boto3.client("sns", aws_access_key_id="XXXX", aws_secret_access_key="XXXX", region_name="us-east-1")
    # Send your SMS message.
    client.publish(PhoneNumber="XXXX", Message="Hello World!")

if __name__ == '__main__':
    send_message()
Is redefining the aws cli with us-east-1 similar to deploying a new instance altogether in us-east-1?
No, it isn't like that at all.
Or is it like having an instance in us-west-1, but just using SNS service from us-east-1?
Yes, that's all you are doing. You can connect to any AWS region's API from anywhere on the Internet. It doesn't matter that your code is running on an EC2 instance in a specific region; it only matters which region you tell the SDK/CLI to use.
You could run the same code on your local computer. Obviously your local computer is not running on AWS, so you would have to tell the code which AWS region to send the API calls to. What you are doing is the same thing.
Code running on an EC2 server is not limited to using the AWS API in the same region that the EC2 server is in.
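To make that concrete, here is a small sketch (not from the original post) showing that the region passed to the client only selects which regional endpoint boto3 talks to, regardless of where the code runs:

import boto3

# The region given to the client selects the regional API endpoint; it has
# nothing to do with where this code is executing.
sns_east = boto3.client("sns", region_name="us-east-1")
print(sns_east.meta.endpoint_url)   # https://sns.us-east-1.amazonaws.com

sns_west = boto3.client("sns", region_name="us-west-1")
print(sns_west.meta.endpoint_url)   # https://sns.us-west-1.amazonaws.com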
Did you try creating a topic before publishing to it? You should try creating a topic and then publishing to that topic.
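If you go that route, a minimal sketch would look like the following; the topic name and phone number here are placeholders, not values from the question:

import boto3

# Create a topic, subscribe a phone number over SMS, and publish to the topic.
sns = boto3.client("sns", region_name="us-east-1")
topic_arn = sns.create_topic(Name="my-sms-topic")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550100")
sns.publish(TopicArn=topic_arn, Message="Hello World!")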
Related
I have an EC2 instance with an instance profile attached to it. This instance profile has permissions to publish messages to an SNS topic. When I remote into the EC2 instance and issue a command like
aws sns publish --topic-arn topic_arn --message hello
This works.
Now I'm trying to do the same with a simple curl command, and this is what I use after remoting into the EC2 instance:
curl "https://sns.us-west-2.amazonaws.com/?Message=hello&Action=Publish&TargetArn=topic_arn"
I get the following error
<Code>MissingAuthenticationToken</Code>
<Message>Request is missing Authentication Token</Message>
I was hoping that curl would attach the instance profile details when sending the request (like the AWS CLI does), but it does not seem to do so. Does anyone know how I can overcome this?
When you do:
curl "https://sns.us-west-2.amazonaws.com/?Message=hello&Action=Publish&TargetArn=topic_arn"
you are making a request directly to the SNS endpoint, which means you have to sign your request with AWS credentials from your instance profile. If you don't want to use the AWS CLI or any AWS SDK to access SNS, you have to implement the entire signature procedure yourself, as described in the docs.
That's why, when you use the AWS CLI
aws sns publish --topic-arn topic_arn --message hello
everything works: the AWS CLI makes a signed request to the SNS endpoint on your behalf.
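If you'd rather not hand-roll Signature Version 4, one middle ground is to let botocore sign the request and send it yourself. This is only a sketch under the assumption that the instance profile supplies the credentials; the topic ARN is the same placeholder as above:

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Credentials (here, the instance profile) are resolved by boto3's normal chain.
credentials = boto3.Session().get_credentials()

request = AWSRequest(
    method="GET",
    url="https://sns.us-west-2.amazonaws.com/",
    params={"Action": "Publish", "TargetArn": "topic_arn", "Message": "hello"},
)
SigV4Auth(credentials, "sns", "us-west-2").add_auth(request)

prepared = request.prepare()
response = requests.get(prepared.url, headers=dict(prepared.headers))
print(response.status_code, response.text)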
My Python application is deployed in a Docker container on an EC2 instance. Passwords are stored in Secrets Manager. At runtime, the application makes an API call to Secrets Manager to fetch the password and connect. Since we recreated the instance, it has started giving the error below:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
My application code is -
session = boto3.session.Session()
client = session.client(service_name='secretsmanager', region_name='us-east-1')
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
If I run -
aws secretsmanager get-secret-value --secret-id abc
It works without any issues, since the IAM policy is appropriately attached to the EC2 instance.
I spent the last two days trying to troubleshoot this but am still stuck, with no clarity on why it is breaking. Any tips or guidance would help.
The problem was with the HttpTokens setting in the instance metadata options, which defaulted to required (IMDSv2-only) on the freshly recreated instance. Reverting it to optional let boto3 reach the instance metadata service again and pick up the instance role.
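For reference, here is a sketch of flipping that setting back with boto3; the instance id is a placeholder. An alternative that keeps IMDSv2 enforced is to raise the hop limit so requests from inside the Docker container can still reach the metadata service:

import boto3

# Sketch only: revert HttpTokens to "optional" on the recreated instance.
# Alternatively, keep HttpTokens="required" and set HttpPutResponseHopLimit=2
# so IMDSv2 calls made from inside the container are not dropped.
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",   # placeholder
    HttpTokens="optional",
    HttpEndpoint="enabled",
)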
I am unable to call AWS services (Secrets Manager and SNS) from Fargate tasks.
I want these services to be invoked from inside the Docker image, which is hosted on ECR. When I run the pipeline, everything loads and runs correctly, except that when the script inside the Docker container is invoked it throws an error. The script makes a call to either Secrets Manager or SNS. The error thrown is:
Unable to locate credentials. You can configure credentials by running "aws configure".
If I do aws configure, the error goes away and everything works smoothly. But I do not want to store the AWS credentials anywhere.
When I open the task definition I can see two roles: pipeline-task and ecsTaskExecutionRole.
Although I have given full administrator rights to both of these roles, the pipeline still throws the error. Is there some place I am missing where I can assign roles/policies, etc.? I want to completely avoid using aws configure.
If the script with the issue is not the PID 1 process (the process used to start and stop the container), it will not automatically read the task role (pipeline-task-role). From your description, this sounds like the case.
Add this to your Dockerfile:
RUN echo 'export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)' >> /root/.profile
The AWS SDK used by the script should then know where to pick up the credentials from.
I don't know if my problem was the same as yours, but I also experienced this kind of problem, where I had set the task role but the container didn't get the right permissions. After spending a few days on it, I discovered that if you set any of the AWS environment variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
or AWS_DEFAULT_REGION
in the task definition, the credential provider chain will stop at the "static credentials" step, so the SDK your script is using will look for the remaining credentials in the ~/.aws/credentials file, and since it can't find them there it throws Unable to locate credentials.
If you want to know more about the credential provider chain, you can read about it at https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html
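As a quick way to check which step of the chain actually supplied credentials inside the container, something like this sketch can help; the method labels are botocore's internal names and may vary by SDK version:

import boto3

# Prints which provider produced the credentials, e.g. "env",
# "container-role" (ECS task role), or "iam-role" (EC2 instance profile).
creds = boto3.Session().get_credentials()
print(creds.method if creds else "no credentials found")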
I am using boto3 to control my EC2 instances on AWS from a Python environment, using the ec2 and ssm services. I have created an IAM user that has the AmazonSSMFullAccess and AmazonEC2FullAccess policies attached.
ec2 = boto3.client(
'ec2',
region_name='eu-west-1',
aws_access_key_id='…',
aws_secret_access_key='…/…+…'
)
ssm = boto3.client(
'ssm',
region_name='eu-west-1',
aws_access_key_id='…',
aws_secret_access_key='…/…+…'
)
I ran:
ec2.describe_instances()['Reservations']
Which returned a list of all my instances.
But when I run:
ssm.describe_instance_information()
I get an empty list, though I have at least one instance running on an Amazon Linux AMI (ami-ca0135b3) and six others on recent Ubuntu AMIs. They are all in eu-west-1 (Ireland).
They should have the SSM Agent preinstalled (https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-agent.html).
I SSHed into the Amazon Linux instance and tried to get the SSM agent logs using:
sudo tail -f /var/log/amazon/ssm/amazon-ssm-agent.log
But nothing happens there when I run my Python code. A sequence of messages gets displayed from time to time:
HealthCheck reporting agent health.
error when calling AWS APIs. error details - NoCredentialProviders: no valid providers in chain. Deprecated.
I also tried running a command through the web interface and selected 'AWS-RunRemoteScript', but no instance is shown below it.
My goal is to run:
ssm.send_command(
DocumentName="AWS-RunShellScript",
Parameters={'commands': [command]},
InstanceIds=[instance_id],
)
But it gives me the following error, probably due to the previous problem.
botocore.errorfactory.InvalidInstanceId: An error occurred (InvalidInstanceId) when calling the SendCommand operation
The agent is pre-installed, but the instance itself (not just your IAM user) still needs the proper role to communicate with Systems Manager. In particular, see this step of Configuring Access to Systems Manager:
By default, Systems Manager doesn't have permission to perform actions on your instances. You must grant access by using an IAM instance profile. An instance profile is a container that passes IAM role information to an Amazon EC2 instance at launch.
You should review the whole configuration guide and make sure you have configured all required roles appropriately.
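For example, here is a rough sketch of attaching an existing instance profile to a running instance with boto3; the profile name and instance id are placeholders, and the role behind the profile must trust ec2.amazonaws.com and carry an SSM policy such as the managed AmazonEC2RoleforSSM:

import boto3

# Sketch: attach an instance profile (already created and carrying an SSM
# policy) to a running instance. The name and id below are placeholders.
ec2 = boto3.client("ec2", region_name="eu-west-1")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "my-ssm-instance-profile"},
    InstanceId="i-0123456789abcdef0",
)
# The SSM agent only picks up the new role after it is restarted on the instance.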
If I have a bash script sitting on an EC2 instance, is there a way that Lambda could trigger it?
The trigger for Lambda would be coming from RDS. So a table in MySQL gets updated and a specific column in that table gets set to "Ready"; Lambda would have to pull the ID of the row with the "Ready" status and send that ID to the bash script.
Let's assume some things. First, you know how to set up a "trigger" using SNS (see here) and how to hang a Lambda function off of that trigger. Second, you know a little about Python (Lambda's language offerings are Node.js, Java, and Python), because this example will be in Python. Additionally, I will not cover how to query a database with MySQL; you did not mention whether your RDS instance is MySQL, Postgres, or otherwise. Lastly, you need to understand how to grant permissions across AWS resources with IAM roles and policies.
The following script at least outlines the method of firing a script at your instance (you'll have to figure out how to query for the relevant information, or pass that information into the SNS topic) and then running the shell command on an instance you specify.
import boto3

def lambda_handler(event, context):
    # Query RDS to get the ID, or read it from the SNS topic message
    id = *queryresult*
    command = 'sh /path/to/scriptoninstance ' + id
    ssm = boto3.client('ssm')
    ssmresponse = ssm.send_command(
        InstanceIds=['i-instanceid'],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': [command]}
    )
I would probably have two flags on the RDS row: one that says 'ready' and one that says 'identified'. So the SNS topic triggers the Lambda script, the Lambda script looks for rows with 'ready' = true and 'identified' = false, changes 'identified' to true (to make sure other Lambda invocations that could be running at the same time aren't going to pick the row up), and then fires the script. If the script doesn't run successfully, change 'identified' back to false to make sure your data stays valid.
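Here is a rough sketch of that claim step using pymysql; the table and column names (tasks, id, ready, identified) are hypothetical, and fire_script stands in for the ssm.send_command call above:

import pymysql

conn = pymysql.connect(host="my-rds-endpoint", user="user", password="pw", db="mydb")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT id FROM tasks WHERE ready = 1 AND identified = 0 LIMIT 1")
        row = cur.fetchone()
        if row:
            # The conditional UPDATE is the claim: only one concurrent Lambda
            # invocation will see an affected row count of 1 for a given id.
            claimed = cur.execute(
                "UPDATE tasks SET identified = 1 WHERE id = %s AND identified = 0",
                (row[0],),
            )
            conn.commit()
            if claimed:
                fire_script(row[0])  # hypothetical wrapper around ssm.send_command
finally:
    conn.close()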
Using Amazon EC2 Simple Systems Manager, you can configure an SSM document to run a script on an instance and pass that script a parameter. The Lambda function would need to run the SSM send-command, targeting the instance by its instance id.
Sample SSM document:
run_my_example.json:
{
  "schemaVersion": "1.2",
  "description": "Run shell script to launch.",
  "parameters": {
    "taskId": {
      "type": "String",
      "default": "",
      "description": "(Required) the Id of the task to run",
      "maxChars": 16
    }
  },
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": ["run_my_example.sh"]
        }
      ]
    }
  }
}
The above SSM document accepts taskId as a parameter.
Save this document as a JSON file, and call create-document using the AWS CLI:
aws ssm create-document --content file:///tmp/run_my_example.json --name "run_my_example"
You can review the description of the SSM document by calling describe-document:
aws ssm describe-document --name "run_my_example"
You can specify the taskId parameter and run the command by using the document name with send-command:
aws ssm send-command --instance-ids i-12345678 --document-name "run_my_example" --parameters taskId=123456
NOTES
Instances must be running the latest version of the SSM agent.
You will need some logic in the Lambda script to identify the instance ids of the target server(s), e.g. look up the instance id of a specifically tagged instance (see the sketch below).
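For instance, a small sketch of resolving instance ids by tag with boto3; the tag key and value here are placeholders:

import boto3

# Find running instances carrying a hypothetical Role=script-runner tag.
ec2 = boto3.client("ec2")
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Role", "Values": ["script-runner"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]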
I think you could use the new EC2 Run Command feature to accomplish this.
There are a few things to consider, one of them being:
Security. As of today, Lambda can't run in a VPC, which means your EC2 instance has to have a wide-open inbound security group.
I would suggest taking a look at a message queue (say, SQS). This would solve a lot of headaches.
Here's how it might work (a rough sketch follows the outline):
Lambda: get the message; send it to SQS.
EC2: a cron job that is triggered every N minutes; pull the message from SQS; process the message.
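A sketch of that hand-off with boto3; the queue URL and the process function are placeholders:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-west-1.amazonaws.com/123456789012/my-queue"  # placeholder

# Lambda side: enqueue the ID of the row that turned "Ready".
sqs.send_message(QueueUrl=queue_url, MessageBody="42")

# EC2 cron side: pull and process any pending messages.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10).get("Messages", [])
for msg in messages:
    process(msg["Body"])  # hypothetical handler for the row ID
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])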