I just followed these instructions (Link) to get the AWS CloudWatch Logs agent installed on my EC2 instance.
I updated my repositories: sudo yum update -y
I installed the awslogs package: sudo yum install -y awslogs
I edited /etc/awslogs/awscli.conf, confirming on the EC2 console that my AZ is us-west-2b.
I left the default configuration of the /etc/awslogs/awslogs.conf file as is, after confirming that logs are indeed being written to the default path.
I checked the /var/log/awslogs.log file and it is repeatedly showing the error:
EndpointConnectionError: Could not connect to the endpoint URL: "https://logs.us-west-2b.amazonaws.com/"
I do not see the expected newly created log group or log stream in the CloudWatch console. What am I missing here?
Should I be pointing at some endpoint other than https://logs.us-west-2b.amazonaws.com/? If so, where is that configured?
Thanks in advance,
Graham
The awscli.conf file expects the region, not the Availability Zone (AZ).
Specify the region as us-west-2.
Here is the relevant documentation from the reference page:
Edit the /etc/awslogs/awscli.conf file and in the [default] section, specify the region where you want to view log data and add your credentials.
region = us-east-1
aws_access_key_id = <YOUR ACCESS KEY>
aws_secret_access_key = <YOUR SECRET KEY>
The error
EndpointConnectionError: Could not connect to the endpoint URL: "https://logs.us-west-2b.amazonaws.com/"
is caused by specifying the Availability Zone (us-west-2b) instead of the region, which produces an endpoint that does not exist.
The correct endpoint for the CloudWatch Logs service in us-west-2 is
logs.us-west-2.amazonaws.com.
Please refer to the following documentation for AWS service endpoints:
http://docs.aws.amazon.com/general/latest/gr/rande.html#cwl_region
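For your setup, a minimal /etc/awslogs/awscli.conf would look like the sketch below; the two credential lines are only needed if the instance does not have an IAM role attached, and the values shown are placeholders:
[plugins]
cwlogs = cwlogs
[default]
region = us-west-2
aws_access_key_id = <YOUR ACCESS KEY>
aws_secret_access_key = <YOUR SECRET KEY>
After saving the file, restart the agent so it picks up the new region:
sudo service awslogs restart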
Related
I have successfully installed the CloudWatch agent on an Amazon Linux instance and configured the awslogs.conf file as below. But unfortunately the log group is created in us-east-1 instead of the configured region us-east-2. Any idea what mistake I'm making?
Please check your AWS profile's region. This is most likely because your current default region is set to us-east-1, so the agent falls back to it. Try running the aws configure command and change your region to the desired one (us-east-2).
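For example, you can change just the default region without re-entering your keys (assuming a reasonably recent AWS CLI), then confirm what the CLI will actually use:
aws configure set region us-east-2
aws configure get region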
I recently spun up a t2.micro image and I want to install neo4j on it. I started with the instructions at https://neo4j.com/developer/neo4j-cloud-aws-ec2-ami/. But I got to the step for creating a security group and I received an error that a region needed to be supplied. Here is the command I used:
aws ec2 create-security-group \
--group-name $GROUP \
--description "Neo4j security group"
The error message was
You must specify a region. You can also configure your region by running "aws configure".
When I run aws configure I get prompted for a lot of things that don't seem related to the region. Not only am I prompted for values that I don't know where or how to get, but when I am prompted for the region I am not sure what format to enter it in. So my question is: how do I configure a security group so I can move on to installing neo4j on this instance?
There are still several steps to follow to install neo4j, but I seem to be tripped up on this step.
The AWS CLI commands expect a default region in ~/.aws/config:
[default]
region=us-west-2
output=json
On the link that you have shared, there is a step to "Configure the AWS CLI with Your Credentials". This step lets you set up AWS profile(s), and as part of those profiles you can set a region.
Follow this link to understand how to set up your AWS profile with credentials and region details:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
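Alternatively, you can pass the region explicitly on the command itself rather than relying on a default; us-east-1 below is only an example, so substitute whichever region you want the security group created in:
aws ec2 create-security-group \
    --group-name "$GROUP" \
    --description "Neo4j security group" \
    --region us-east-1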
Hope it helps
I configured the AWS CLI on a Linux system.
When running any command like "aws ec2 describe-instances", it shows the error "Invalid IPv6 URL".
Ran into the same error.
Running this command fixed the error for me:
export AWS_DEFAULT_REGION=us-east-1
You might also try specifying the region when running any command:
aws s3 ls --region us-east-1
Hope this helps!
Or run aws configure and enter a valid region for the default region name.
I ran into this issue because the region was mistyped. When you run aws configure during initial setup and try to delete a mistaken entry, you can end up with invalid characters in the region name.
Hopefully, running aws configure again will resolve your issue.
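If you want to see what region the CLI is actually picking up before re-running aws configure, the following should show it (the path assumes the default config location):
aws configure get region
cat ~/.aws/config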
What I'm trying to do is monitor a log file through the CloudWatch Logs agent.
I have installed the CloudWatch Logs agent on my EC2 Linux instance (the instance has an instance profile with an IAM role attached).
The installation was successful, but when I run sudo service awslogs status
I get the status message dead but pid file exists.
In my error log file (/var/log/awslogs.log) there is only this line repeating over and over again: 'AccessKeyId'.
How can I fix the CloudWatch Logs agent and make it work?
This means that the CloudWatch Logs agent requires your AWS access key/secret. These can be provided in /etc/awslogs/awscli.conf in the following format:
[plugins]
cwlogs = cwlogs
[default]
region = YOUR_INSTANCE_REGION (e.g. us-east-1)
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
Restart the service after making this change:
sudo service awslogs restart
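Since your instance already has an IAM role attached, another option is to leave the key lines out and first confirm the role's credentials are reachable from the instance; a quick sanity check, assuming the AWS CLI is installed, is:
aws sts get-caller-identity
If that call fails as well, the repeated 'AccessKeyId' errors point to a credentials problem on the instance rather than a problem with the agent itself.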
Hope this helps!!!
This is an odd one for sure. I have an AWS command line user that I've set up with admin privileges in the AWS account. The credentials I generated for the user work when I issue an aws ec2 command, but not when I run an aws iam command.
When I run the aws iam command, this is what I get:
[user@web1:~] #aws iam create-account-alias --account-alias=mcollective
An error occurred (InvalidClientTokenId) when calling the CreateAccountAlias operation: The security token included in the request is invalid.
However when I run an aws ec2 subcommand using the same credentials, I get a success:
[root@web1:~] #aws ec2 describe-instances --profile=mcollective
RESERVATIONS 281933881942 r-0080cb499a0299557
INSTANCES 0 x86_64 146923690580740912 False xen ami-6d1c2007 i-00dcdb6cbff0d7980 t2.micro mcollective 2016-07-27T23:56:50.000Z ip-xxx-xx-xx-xx.ec2.internal xx.xx.xx.xx ec2-xx-xx-xx-xx.compute-1.amazonaws.com xx.xxx.xx.xx /dev/sda1 ebs True subnet-0e734056 hvm vpc-909103f7
BLOCKDEVICEMAPPINGS /dev/sda1
EBS 2016-07-23T01:26:42.000Z False attached vol-0eb52f6a94c5833aa
MONITORING disabled
NETWORKINTERFACES 0e:68:20:c5:fa:23 eni-f78223ec 281933881942 ip-xxx-xx-xx-xx.ec2.internal xxx.xx.xx.xx True in-use subnet-0e734056 vpc-909103f7
ASSOCIATION 281933881942 ec2-xxx-xx-xx-xx.compute-1.amazonaws.com xx.xx.xx.xx
ATTACHMENT 2016-07-23T01:26:41.000Z eni-attach-cbf11a1f True 0 attached
GROUPS sg-b1b3bdca CentOS 7 -x86_64- - with Updates HVM-1602-AutogenByAWSMP-
PRIVATEIPADDRESSES True ip-xxx-xx-xx-xxx.ec2.internal xxx.xx.xx.xx
ASSOCIATION 281933881942 ec2-xx-xx-xx-xx.compute-1.amazonaws.com xx.xx.xx.xx
PLACEMENT us-east-1a default
PRODUCTCODES aw0evgkw8e5c1q413zgy5pjce marketplace
SECURITYGROUPS sg-b1b3bdca CentOS 7 -x86_64- - with Updates HVM-1602-AutogenByAWSMP-
STATE 16 running
TAGS Name mcollective
You have mail in /var/spool/mail/root
[root@web1:~] #
So why the heck are the same credentials working for one set of aws subcommands, but not another? I'm really curious about this one!
This question was answered by @garnaat, but the answer was buried in the comments of the question, so I'm quoting it here for those who might otherwise miss it:
Different profiles are being used in each command through the use of the --profile flag.
Explanation:
In the first command
[user@web1:~] #aws iam create-account-alias --account-alias=mcollective
the --profile flag is not specified, so the credentials of the default AWS profile are used automatically
In the second command
aws ec2 describe-instances --profile=mcollective
the --profile flag is specified, which overrides the default profile
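A quick way to confirm which identity each invocation resolves to is to ask STS under both profiles (mcollective mirrors the profile name used in the question):
aws sts get-caller-identity
aws sts get-caller-identity --profile mcollective
If the two commands return different ARNs, the two subcommands were indeed running with different credentials.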