I am working with AWS Systems Manager. I am using it to push my software to instances with Distributor, which is part of Systems Manager.
Distributor creates a package that contains my installation script, my uninstallation script, and the .exe file I want to push. The whole package is saved as an SSM document maintained by Systems Manager.
My problem: suppose I have two AWS accounts:
Account A - the account where I created the package (us-east-1)
Account B - the account where the instance is running (ap-southeast-2)
I want to push the package in account 'A' to the instance in account 'B'.
I have to do this from my Python code, so I used boto3.
def runcommand(self, instanceid):
    try:
        response = self.ssmclient.send_command(
            InstanceIds=[instanceid],
            DocumentName='AWS-ConfigureAWSPackage',
            TimeoutSeconds=600,
            Parameters={
                'action': ['Install'],
                'installationType': ['Uninstall and reinstall'],
                'name': ['arn:aws:ssm:us-east-1:accnumber:document/SSMDistributorPackage']
            },
            OutputS3Region='ap-southeast-2',
            OutputS3BucketName='output',
            OutputS3KeyPrefix='abcd',
        )
        print("Successfully pushed the agent....")
        print("response(send_command)::::", response)
    except Exception as e:
        print("The error while running command:::::", str(e))
But it throws an error like: Invalid Document: cross-region ARN is not supported.
How can I solve this?
Is there any way to make this package usable from all AWS accounts?
SSM documents can be shared across accounts.
However, this error is not a cross-account error. It means you are referencing an SSM document in one region from another region.
Since the client and the instance are in ap-southeast-2 but the document is in us-east-1, you will need to create the document in the ap-southeast-2 region.
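A minimal sketch of the resulting call, assuming the Distributor package has been recreated (or made available) in ap-southeast-2 under the same name, SSMDistributorPackage; the instance ID below is a placeholder:
import boto3

# The SSM client must target the region the instance is in.
ssm = boto3.client('ssm', region_name='ap-southeast-2')

response = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],  # placeholder instance ID in account B
    DocumentName='AWS-ConfigureAWSPackage',
    TimeoutSeconds=600,
    Parameters={
        'action': ['Install'],
        'installationType': ['Uninstall and reinstall'],
        # Reference the package by name (or by an ARN in the same region),
        # not by an ARN that points at us-east-1.
        'name': ['SSMDistributorPackage'],
    },
)
print(response['Command']['CommandId'])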
Related
My Python application is deployed in a Docker container on an EC2 instance. Passwords are stored in Secrets Manager. At runtime, the application makes an API call to Secrets Manager to fetch the password and connect. Since we recreated the instance, it started giving the error below:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
My application code is -
session = boto3.session.Session()
client = session.client(service_name='secretsmanager', region_name='us-east-1')
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
If I run -
aws secretsmanager get-secret-value --secret-id abc
It works without any issues since IAM policy is appropriately attached to the EC2 instance.
I spent the last 2 days trying to troubleshoot this but am still stuck with no clarity on why this is breaking. Any tips or guidance would help.
The problem was with the HttpTokens setting in the instance metadata options, which defaulted to required after the fresh update. Reverting it to optional allowed boto3 to call the instance metadata service again and inherit the instance's role.
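For reference, a minimal sketch of that change with boto3 (the instance ID and region are placeholders); an alternative that keeps IMDSv2 enforced is to leave HttpTokens as required and raise HttpPutResponseHopLimit so calls from inside a Docker container can still reach the metadata service:
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # placeholder region

# Allow IMDSv1 again so the containerized application can fetch
# the instance role's credentials without a session token.
ec2.modify_instance_metadata_options(
    InstanceId='i-0123456789abcdef0',  # placeholder instance ID
    HttpTokens='optional',
    # Alternative: keep HttpTokens='required' and set
    # HttpPutResponseHopLimit=2 so traffic from inside the container
    # can reach IMDSv2.
)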
I'm trying to use the aws-sdk-go in my application. It's running on an EC2 instance. Now, in the Configuring Credentials section of the docs, https://docs.aws.amazon.com/sdk-for-go/api/, it says it will look in:
* Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles.
* Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.
* EC2 Instance Role Credentials - Use EC2 Instance Role to assign credentials to application running on an EC2 instance. This removes the need to manage credential files in production.
Wouldn't the best order be the reverse? But my main question is: do I need to ask the instance whether it has a role and then use that role to set up the credentials? This is where I'm not sure what I need to do and how.
I did try a simple test of creating an empty config, essentially only setting the region, and running it on the instance with the role, and it seems to have "worked", but in this case I am not sure if I need to explicitly set the role or not.
awsSDK.Config{
Region: awsSDK.String(a.region),
MaxRetries: awsSDK.Int(maxRetries),
HTTPClient: http.DefaultClient,
}
I just want to confirm whether this is the proper way of doing it or not. My thinking is that I need to do something like the following:
role = use sdk call to get role on machine
set awsSDK.Config { Credentials: credentials form of role,
...
}
issue service command with returned client.
Any more docs/pointers would be great!
I have never used the Go SDK, but the AWS SDKs I have used automatically use the EC2 instance role if credentials are not found from any other source.
Here's an AWS blog post explaining the approach AWS SDKs follow when fetching credentials: https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/. In particular, see this:
If you use code like this, the SDKs look for the credentials in this order:
1. In environment variables. (Not the .NET SDK, as noted earlier.)
2. In the central credentials file (~/.aws/credentials or %USERPROFILE%\.aws\credentials).
3. In an existing default, SDK-specific configuration file, if one exists. This would be the case if you had been using the SDK before these changes were made.
4. For the .NET SDK, in the SDK Store, if it exists.
5. If the code is running on an EC2 instance, via an IAM role for Amazon EC2. In that case, the code gets temporary security credentials from the instance metadata service; the credentials have the permissions derived from the role that is associated with the instance.
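For what it's worth, boto3 (used elsewhere in this thread) follows the same lookup order, so a quick way to confirm the instance role is being picked up is to supply no credentials at all and check which identity the SDK resolves; a Python sketch, purely for comparison:
import boto3

# No access keys are supplied here; the SDK walks the chain above
# (environment variables, shared credentials file, then the EC2
# instance role via the instance metadata service).
sts = boto3.client('sts', region_name='us-east-1')  # region is arbitrary for STS
print(sts.get_caller_identity()['Arn'])  # shows which role/user was resolved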
In my apps, when I need to connect to AWS resources, I tend to use an access key and secret key that have specific predefined IAM roles. Assuming I have those two, the code I use to create a session is:
awsCredentials := credentials.NewStaticCredentials(awsAccessKeyID, awsSecretAccessKey, "")
awsSession = session.Must(session.NewSession(&aws.Config{
Credentials: awsCredentials,
Region: aws.String(awsRegion),
}))
When I use this, the two keys are usually specified as environment variables (for example, when I deploy to a Docker container).
A complete example: https://github.com/retgits/flogo-components/blob/master/activity/amazons3/activity.go
I am using boto3 to control my EC2 instances on AWS from a Python environment, using the ec2 and ssm services. I have created an IAM user that has the AmazonSSMFullAccess and AmazonEC2FullAccess policies attached.
ec2 = boto3.client(
'ec2',
region_name='eu-west-1',
aws_access_key_id='…',
aws_secret_access_key='…/…+…'
)
ssm = boto3.client(
'ssm',
region_name='eu-west-1',
aws_access_key_id='…',
aws_secret_access_key='…/…+…'
)
I ran:
ec2.describe_instances()['Reservations']
Which returned a list of all my instances.
But when I run:
ssm.describe_instance_information()
I get an empty list, though I have at least one instance running on an Amazon Linux AMI (ami-ca0135b3) and six others on recent Ubuntu AMIs. They are all in eu-west-1 (Ireland).
They should all have the SSM Agent preinstalled (https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-agent.html).
I SSHed into the Amazon Linux instance and tried to read the SSM Agent logs using:
sudo tail -f /var/log/amazon/ssm/amazon-ssm-agent.log
But nothing happens there when I run my Python code. Only a sequence of messages gets displayed from time to time:
HealthCheck reporting agent health.
error when calling AWS APIs. error details - NoCredentialProviders: no valid providers in chain. Deprecated.
I also tried running a command through the web interface and selected 'AWS-RunRemoteScript', but no instances were shown in the target list.
My goal is to run:
ssm.send_command(
DocumentName="AWS-RunShellScript",
Parameters={'commands': [command]},
InstanceIds=[instance_id],
)
But it gives me the following error, probably due to the previous problem.
botocore.errorfactory.InvalidInstanceId: An error occurred (InvalidInstanceId) when calling the SendCommand operation
The agent is preinstalled, but the instance (not just your IAM user) still needs the proper role to communicate with Systems Manager. See in particular this step of Configuring Access to Systems Manager:
By default, Systems Manager doesn't have permission to perform actions
on your instances. You must grant access by using an IAM instance
profile. An instance profile is a container that passes IAM role
information to an Amazon EC2 instance at launch.
You should review the whole configuration guide and make sure you have configured all required roles appropriately.
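A minimal sketch of granting that access with boto3, assuming hypothetical names (SSMInstanceRole, SSMInstanceProfile) and a placeholder instance ID; the AmazonSSMManagedInstanceCore managed policy covers what the agent needs:
import json
import boto3

iam = boto3.client('iam')
ec2 = boto3.client('ec2', region_name='eu-west-1')

# Role that EC2 instances can assume, with the SSM managed policy attached.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName='SSMInstanceRole',
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName='SSMInstanceRole',
                       PolicyArn='arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore')

# Wrap the role in an instance profile and attach it to the running instance.
iam.create_instance_profile(InstanceProfileName='SSMInstanceProfile')
iam.add_role_to_instance_profile(InstanceProfileName='SSMInstanceProfile',
                                 RoleName='SSMInstanceRole')
ec2.associate_iam_instance_profile(
    IamInstanceProfile={'Name': 'SSMInstanceProfile'},
    InstanceId='i-0123456789abcdef0',  # placeholder
)
# The SSM Agent may need a restart (or a few minutes) before the instance
# appears in describe_instance_information().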
I have created an AWS account, launched an EC2 instance, and created buckets in S3. I have also installed Python, boto3, and the AWS CLI. But I'm stuck on the step of connecting Python with AWS.
The first and foremost thing that you need to check is whether your EC2 instance has permissions to access the S3 bucket. This can be done in 2 ways:
Store the credentials in the EC2 instance (insecure)
Assign IAM roles to the EC2 instance that has S3 read and write permissions (secure)
In order to assign a role to your instance, follow this guide.
Once your permissions are set up, you can use either the AWS CLI or boto3 to access S3 from your EC2 instance.
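A minimal boto3 sketch under the second (role-based) option; the bucket and file names are placeholders:
import boto3

# No access keys are needed here: boto3 picks up the instance role's
# temporary credentials from the instance metadata service.
s3 = boto3.client('s3')

# List your buckets, then upload and download a file (placeholder names).
print([b['Name'] for b in s3.list_buckets()['Buckets']])
s3.upload_file('local_file.txt', 'my-example-bucket', 'remote_key.txt')
s3.download_file('my-example-bucket', 'remote_key.txt', 'copy_of_file.txt')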
1: If you are asking how to establish a connection for running your AWS Python code, then follow these steps in a terminal:
Run aws configure (this will ask for the credentials, which you will find in the .csv file that was created initially).
Provide the credentials and try to run the code.
For example:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
2: If your question is how to use boto3 API calls to run AWS functions, then this might help: the boto3 SDK offers both low-level clients and higher-level resources.
ec2 = boto3.resource('ec2')
client = boto3.client('ec2')
You can follow this link for more detailed info: http://boto3.readthedocs.io/en/latest/reference/services/ec2.html
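A short sketch of the difference between the two interfaces (the eu-west-1 region is only an example):
import boto3

# Higher-level resource interface: iterate over objects with attributes.
ec2_resource = boto3.resource('ec2', region_name='eu-west-1')
for instance in ec2_resource.instances.all():
    print(instance.id, instance.state['Name'])

# Low-level client interface: raw API calls that return dictionaries.
ec2_client = boto3.client('ec2', region_name='eu-west-1')
for reservation in ec2_client.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'], instance['State']['Name'])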
I have a quick question about usage of AWS SNS.
I have deployed an EC2 instance (t2.micro, Linux) in us-west-1 (N. California). I have written a Python script using boto3 to send a simple text message to my phone. Later I discovered that there is no SNS text-messaging service for instances deployed outside us-east-1 (N. Virginia). Up to this point it made sense, because I see the error below when I execute my Python script, as the region is defined as "us-west-1" in aws configure (AWS CLI) and also in my Python script.
botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: PhoneNumber Reason:
But as a test, when I changed the region in aws configure and in my Python script to "us-east-1", my script pushed a text message to my phone. Isn't that weird? Can anyone please explain why this works just by changing the region in the AWS CLI and in my Python script, even though my instance is still in us-west-1 and I don't see a "Publish text message" option on the SNS dashboard in the N. California region?
Is redefining the AWS CLI with us-east-1 similar to deploying a new instance altogether in us-east-1? I don't think so; correct me if I am wrong. Or is it like having an instance in us-west-1 but just using the SNS service from us-east-1? Please shed some light.
Here is my Python script, if anyone needs to look at it (it's a simple snippet).
import boto3

def send_message():
    # Create an SNS client
    client = boto3.client("sns", aws_access_key_id="XXXX", aws_secret_access_key="XXXX", region_name="us-east-1")
    # Send your sms message.
    client.publish(PhoneNumber="XXXX", Message="Hello World!")

if __name__ == '__main__':
    send_message()
Is redefining the aws cli with us-east-1 similar to deploying a new instance altogether in us-east-1?
No, it isn't like that at all.
Or is it like having an instance in us-west-1, but just using SNS service from us-east-1?
Yes, that's all you are doing. You can connect to any AWS region's API from anywhere on the Internet. It doesn't matter that the code is running on an EC2 instance in a particular region; all that matters is which region you tell the SDK/CLI to use.
You could run the same code on your local computer. Obviously your local computer is not running on AWS so you would have to tell the code which AWS region to send the API calls to. What you are doing is the same thing.
Code running on an EC2 server is not limited to using the AWS API in the same region that the EC2 server is in.
Did you try creating a topic before publishing to it? You should try creating a topic and then publishing to that topic.
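A minimal sketch of that suggestion (the topic name and phone number are placeholders); note that the script above publishes directly to a phone number, which does not strictly require a topic:
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Create (or look up) a topic, subscribe the phone number via SMS,
# then publish to the topic instead of directly to the phone number.
topic = sns.create_topic(Name="my-sms-topic")  # placeholder topic name
sns.subscribe(TopicArn=topic["TopicArn"],
              Protocol="sms",
              Endpoint="+15551230000")  # placeholder phone number
sns.publish(TopicArn=topic["TopicArn"], Message="Hello World!")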