Error (InvalidChangeBatch) in adding multiple DNS Records from aws command line - amazon-web-services

I am trying to add multiple DNS records using this script, add_multipleDNSrecord.sh, and I am getting this error:
A client error (InvalidChangeBatch) occurred when calling the ChangeResourceRecordSets operation: FATAL problem: UnsupportedCharacter (Value contains unsupported characters) encountered with ' '
But I am able to add a single record without any issue from the AWS CLI. Can anyone please tell me what went wrong in this script?
#!/bin/bash
# declare STRING variable
STRING="Hello World"
#print variable on a screen
echo $STRING
# Hosted Zone ID
ZONEID="Z24*************"
#Comment
COMMENT="Add new entry to the zone"
# The Time-To-Live of this recordset
TTL=300
# Type
TYPE="A"
# Input File Name
FILENAME=/home/ec2-user/awscli/route53/scripts/test.json
cat >> $FILENAME << EOF
{
  "Comment":"$COMMENT",
  "Changes":[
    {
      "Action":"CREATE",
      "ResourceRecordSet":{
        "ResourceRecords":[
          {
            "Value":"$IP"
          }
        ],
        "Name":"$RECORDSET",
        "Type":"$TYPE",
        "TTL":$TTL
      }
    }
  ]
}
EOF
echo $FILENAME

After replacing the space with a dot, the problem was solved.
Now the script works fine and is able to add multiple records to the hosted zone.
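Several records can also be combined into a single change batch instead of one call per record. Below is a sketch only; the input file format (one "<record-name> <ip>" pair per line) and the function name are made up for illustration, and it assumes names and values contain no spaces, since a space inside a name or value is what triggered the UnsupportedCharacter error above.

```shell
#!/bin/bash
# Sketch: combine several CREATE actions into one Route 53 change batch.
# Hypothetical input format: one "<record-name> <ip>" pair per line.
build_change_batch() {
  local input="$1" changes="" sep=""
  while read -r name ip; do
    [ -z "$name" ] && continue
    changes="${changes}${sep}{\"Action\":\"CREATE\",\"ResourceRecordSet\":{\"Name\":\"$name\",\"Type\":\"A\",\"TTL\":300,\"ResourceRecords\":[{\"Value\":\"$ip\"}]}}"
    sep=","
  done < "$input"
  printf '{"Comment":"Add new entries","Changes":[%s]}\n' "$changes"
}
```

It would then be used along the lines of `build_change_batch records.txt > /tmp/batch.json` followed by `aws route53 change-resource-record-sets --hosted-zone-id "$ZONEID" --change-batch file:///tmp/batch.json`.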

Related

Error parsing parameter '--change-batch': Invalid JSON + Route 53 DNS recordset update

When I try to execute the code below, I get this error. Am I missing anything in the JSON format?
```
Error parsing parameter '--change-batch': Invalid JSON: Expecting value: line 7 column 14 (char 160)
JSON received: {
"Comment":"To update private dns name recordset",
"Changes":[
{
"Action":"UPSERT",
"ResourceRecordSet":{
"Name":devdnstest.test.com.,
"Type":,
"TTL":300,
"ResourceRecords":[
{
"Value":"10.0.0.2"
}
]
}
}
]
}
#!/bin/bash
#Variable Declaration
HOSTED_ZONE_ID="Z2UERB3PG6W9Y"
NAME="devdnstest.test.com."
TYPE="CNAME"
TTL=300
#get current recordset IP from Route 53
aws route53 list-resource-record-sets --hosted-zone-id $HOSTED_ZONE_ID | \
jq -r '.ResourceRecordSets[] | select (.Name == "'"$NAME"'") | select (.Type == "'"$TYPE"'") | .ResourceRecords[0].Value' > /tmp/current_route53_value
cat /tmp/current_route53_value
#prepare route 53 file
file=/home/route53_changes.json
cat << EOF > $file
{
  "Comment":"To update private dns name recordset",
  "Changes":[
    {
      "Action":"UPSERT",
      "ResourceRecordSet":{
        "Name":$NAME,
        "Type":$CNAME,
        "TTL":$TTL,
        "ResourceRecords":[
          {
            "Value":"10.0.0.2"
          }
        ]
      }
    }
  ]
}
EOF
#update records
aws route53 change-resource-record-sets --hosted-zone-id $HOSTED_ZONE_ID --change-batch file://$file
```
In this code I am trying to update a private hosted zone record set from 10.0.0.1 to 10.0.0.2.
It's complaining about the "Type":$CNAME, but I'm not sure what else to put there.
Appreciate your inputs.
Thanks,
The data you're sending is not JSON - in particular,
"Name":devdnstest.test.com.,
"Type":,
There may be other tiny errors in there, but this will get you started.
In your original shell script, you'll need to add quotes and fix $CNAME to $TYPE, something along the lines of
"Name":"$NAME",
"Type":"$TYPE",
For other people coming across this issue: you'll also get a (slightly different) "Invalid JSON" message for syntactically correct JSON that doesn't conform to what AWS Route 53 is expecting.
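Since these errors come from malformed generated JSON, it can help to validate the file locally before calling the CLI. A minimal sketch, using python3's json.tool (jq would work equally well); the file path here is hypothetical:

```shell
# Sketch: validate a generated change batch before calling the AWS CLI, so
# quoting mistakes like "Name":devdnstest.test.com. fail fast locally.
validate_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1
}

file=/tmp/route53_changes.json   # hypothetical path
if validate_json "$file"; then
  echo "JSON ok"
  # aws route53 change-resource-record-sets ... --change-batch file://$file
else
  echo "invalid JSON in $file" >&2
fi
```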

Environment Variables in newest AWS EC2 instance

I am trying to get environment variables into an EC2 instance (trying to run a Django app on Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type, ami-0ff8a91507f77f867). How do you get them in on the newest version of Amazon Linux, or get logging working so the problem can be traced?
user-data text (modified from here):
#!/bin/bash
#trying to get a file made
touch /tmp/testfile.txt
cat 'This and that' > /tmp/testfile.txt
#trying to log
echo 'Woot!' > /home/ec2-user/user-script-output.txt
#Trying to get the output logged to see what is going wrong
exec > >(tee /var/log/user-data.log|logger -t user-data ) 2>&1
#trying to log
echo "XXXXXXXXXX STARTING USER DATA SCRIPT XXXXXXXXXXXXXX"
#trying to store the ENVIRONMENT VARIABLES
PARAMETER_PATH='/'
REGION='us-east-1'
# Functions
AWS="/usr/local/bin/aws"
get_parameter_store_tags() {
  echo $($AWS ssm get-parameters-by-path --with-decryption --path ${PARAMETER_PATH} --region ${REGION})
}
params_to_env () {
  params=$1
  # If .Tags does not exist we assume an SSM Parameters object.
  SELECTOR="Name"
  for key in $(echo $params | /usr/bin/jq -r ".[][].${SELECTOR}"); do
    value=$(echo $params | /usr/bin/jq -r ".[][] | select(.${SELECTOR}==\"$key\") | .Value")
    key=$(echo "${key##*/}" | /usr/bin/tr ':' '_' | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
    export $key="$value"
    echo "$key=$value"
  done
}
# Get TAGS
if [ -z "$PARAMETER_PATH" ]
then
  echo "Please provide a parameter store path. -p option"
  exit 1
fi
TAGS=$(get_parameter_store_tags ${PARAMETER_PATH} ${REGION})
echo "Tags fetched via ssm from ${PARAMETER_PATH} ${REGION}"
echo "Adding new variables..."
params_to_env "$TAGS"
Notes -
What I think I know but am unsure of:
the user-data script is only run when the instance is first created, not when I stop and then start it, as mentioned here (although that page also says [I think this is outdated] that the output is logged to /var/log/cloud-init-output.log)
I may not be starting the instance correctly
I don't know where to store the bash script so that it can be executed
What I have verified
the user-data text is on the instance: ssh-ing in and running curl http://169.254.169.254/latest/user-data shows the current text (#!/bin/bash …)
What I've tried
editing rc.local directly to export AWS_ACCESS_KEY_ID='JEFEJEFEJEFEJEFE' … and the like
putting them in the AWS Parameter Store (and can see them via the correct call, I just can't trace getting them into the EC2 instance without logs or confirming if the user-data is getting run)
putting ENV variables in Tags and importing them as mentioned here:
tried outputting the logs to other files as suggested here (Not seeing any log files in the ssh instance or on the system log)
viewing the System Log on the aws webpage to see any errors/logs via selecting the instance -> 'Actions' -> 'Instance Settings' -> 'Get System Log' (not seeing any commands run or log statements [only 1 unrelated word of user])
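The name-normalization step inside params_to_env above can at least be checked in isolation, without calling SSM. A minimal sketch of the same tr pipeline; the parameter names used here are hypothetical:

```shell
# Sketch: the same name normalization params_to_env performs, isolated so it
# can be tested offline. Turns an SSM parameter path such as /app/db-password
# (hypothetical) into an environment-variable-style name.
normalize_key() {
  local key="$1"
  key="${key##*/}"                                     # strip the SSM path prefix
  echo "$key" | tr ':' '_' | tr '-' '_' | tr '[:lower:]' '[:upper:]'
}
```

For example, `normalize_key /app/db-password` yields DB_PASSWORD. Separately, note that the `exec > >(tee /var/log/user-data.log|logger -t user-data) 2>&1` redirection only captures output from the point where it runs, so any echo statements placed before it in the user-data script will not appear in the log; moving it to the top of the script makes the trace complete.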

How to paginate over an AWS CLI response?

I'm trying to paginate over EC2 Reserved Instance offerings, but can't seem to paginate via the CLI (see below).
% aws ec2 describe-reserved-instances-offerings --max-results 20
{
"NextToken": "someToken",
"ReservedInstancesOfferings": [
{
...
}
]
}
% aws ec2 describe-reserved-instances-offerings --max-results 20 --starting-token someToken
Parameter validation failed:
Unknown parameter in input: "PaginationConfig", must be one of: DryRun, ReservedInstancesOfferingIds, InstanceType, AvailabilityZone, ProductDescription, Filters, InstanceTenancy, OfferingType, NextToken, MaxResults, IncludeMarketplace, MinDuration, MaxDuration, MaxInstanceCount
The documentation in [1] says to use --starting-token. How am I supposed to do this?
[1] http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-reserved-instances-offerings.html
With deference to a 2017 solution by marjamis, which must have worked on a prior CLI version, here is a working approach for paginating from AWS in bash, tested on a Mac laptop with aws-cli/2.1.2:
# The scope of this example requires that credentials are already available or
# are passed in with the AWS CLI command.
# The parsing example uses jq, available from https://stedolan.github.io/jq/
# The below command is the one being executed and should be adapted appropriately.
# Note that the max items may need adjusting depending on how many results are returned.
aws_command="aws emr list-instances --max-items 333 --cluster-id $active_cluster"
unset NEXT_TOKEN
function parse_output() {
  if [ ! -z "$cli_output" ]; then
    # The output parsing below also needs to be adapted as needed.
    echo $cli_output | jq -r '.Instances[] | "\(.Ec2InstanceId)"' >> listOfinstances.txt
    NEXT_TOKEN=$(echo $cli_output | jq -r ".NextToken")
  fi
}
# The command is run and output parsed in the below statements.
cli_output=$($aws_command)
parse_output
# The while loop below keeps requesting pages until the response no longer
# contains a pagination token. If a call was throttled or errored (leaving
# NEXT_TOKEN empty), it sleeps for three seconds and retries the base command.
while [ "$NEXT_TOKEN" != "null" ]; do
  if [ "$NEXT_TOKEN" == "null" ] || [ -z "$NEXT_TOKEN" ] ; then
    echo "now running: $aws_command "
    sleep 3
    cli_output=$($aws_command)
    parse_output
  else
    echo "now paginating: $aws_command --starting-token $NEXT_TOKEN"
    sleep 3
    cli_output=$($aws_command --starting-token $NEXT_TOKEN)
    parse_output
  fi
done #pagination loop
Looks like some busted documentation.
If you run the following, this works:
aws ec2 describe-reserved-instances-offerings --max-results 20 --next-token someToken
Translating the error message: it expected NextToken, which is represented as --next-token on the CLI.
If you continue to read the reference documentation that you provided, you will learn that:
--starting-token (string)
A token to specify where to start paginating. This is the NextToken from a previously truncated response.
Moreover:
--max-items (integer)
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
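The NextToken control flow described above can be exercised offline against a stub that mimics paginated CLI responses. Everything below (fetch_page, the page contents, the tokens) is made up for illustration; in a real script the stub would be the actual aws command and jq would do the parsing:

```shell
# Sketch: the NextToken loop pattern, run against a stub instead of the real
# CLI. fetch_page stands in for something like
# "aws ec2 describe-reserved-instances-offerings --next-token <token>".
fetch_page() {          # stub: two pages with tokens, then a final page
  case "$1" in
    "")   echo '{"Items":["a"],"NextToken":"t1"}' ;;
    t1)   echo '{"Items":["b"],"NextToken":"t2"}' ;;
    t2)   echo '{"Items":["c"]}' ;;
  esac
}

collect_all() {
  local token="" page items=""
  while :; do
    page=$(fetch_page "$token")
    # Crude sed extraction to stay dependency-free; use jq in real scripts.
    items="$items$(echo "$page" | sed -n 's/.*"Items":\["\([^"]*\)"\].*/\1/p')"
    token=$(echo "$page" | sed -n 's/.*"NextToken":"\([^"]*\)".*/\1/p')
    [ -z "$token" ] && break   # last page carries no NextToken
  done
  echo "$items"
}
```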

AWS SDK php S3 refuses to access bucket name xx.my_domain.com

I want to use AWS S3 to store image files for my website. I created a bucket named images.mydomain.com, which is referred to by a DNS CNAME images.mydomain.com in AWS Route 53.
I want to check whether a folder or file exists; if not, I will create one.
The following PHP code works fine for a regular bucket name using the stream wrapper, but fails for a bucket name of the form xxxx.mydomain.com. This kind of bucket name fails in the doesObjectExist() method too.
// $new_dir = "s3://aaaa/akak3/kk1/yy3/ww4" ; // this line works !
$new_dir = "s3://images.mydomain.com/us000000/10000" ; // this line fails !
if( !file_exists( $new_dir) ){
if( !mkdir( $new_dir , 0777 , true ) ) {
echo "create new dir $new_dir failed ! <br>" ;
} else {
echo "SUCCEED in creating new dir $new_dir <br>" ;
}
} else {
echo "dir $new_dir already exists. Skip creating dir ! <br>" ;
}
I got the following message:
Warning: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint: "images.mydomain.com.s3.amazonaws.com". in C:\AppServ\www\ecity\vendor\aws\aws-sdk-php\src\Aws\S3\StreamWrapper.php on line 737
What is the problem here?
Any advice on what to do for this case?
Thanks!

"YYYYMMDD": Invalid identifier error while trying through SQOOP

Please help me out with the below error. It works fine when checked in Oracle but fails when run through a Sqoop import.
Version: Hadoop 0.20.2-cdh3u4 and Sqoop 1.3.0-cdh3u5
sqoop import $SQOOP_CONNECTION_STRING
--query 'SELECT st.reference,u.unit,st.reading,st.code,st.read_id,st.avg FROM reading st,tunit tu,unit u
WHERE st.reference=tu.reference and st.number IN ('218730','123456') and tu.unit_id = u.unit_id
and u.enrolled='Y' AND st.reading <= latest_off and st.reading >= To_Date('20120701','yyyymmdd')
and st.type_id is null and $CONDITIONS'
--split-by u.unit
--target-dir /sample/input
Error:
12/10/10 09:33:21 ERROR manager.SqlManager: Error executing statement:
java.sql.SQLSyntaxErrorException: ORA-00904: "YYYYMMDD": invalid identifier
followed by....
12/10/10 09:33:21 ERROR sqoop.Sqoop: Got exception running Sqoop:
java.lang.NullPointerException
Thanks & Regards,
Tamil
I believe that the problem is actually on the Bash side (or your command-line interpreter). Your query contains, for example, the fragment u.enrolled='Y'. Notice that you're escaping character constants with single quotes, while also putting the entire query into single quotes: --query 'YOUR QUERY'. This results in something like --query '...u.enrolled='Y'...', and such a string is stripped by bash to '...u.enrolled=Y...'. You can verify that by using echo to see exactly what bash will do with your string before it is passed to Sqoop.
jarcec#jarcec-thinkpad ~ % echo '...u.enrolled='Y'...'
...u.enrolled=Y...
I would recommend either escaping all single quotes (\') inside your query or choosing double quotes for the entire query. Note that the latter option will require escaping $ characters with a backslash (\$).
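The quote-stripping described above can be demonstrated with echo alone. A minimal sketch comparing the two quoting styles (the WHERE fragment is shortened from the Sqoop query above):

```shell
# Sketch: what the shell actually passes along for each quoting style.
# bad: inner single quotes end and restart the quoted string, so the shell
# strips them and Y ends up unquoted in the argument.
bad=$(echo 'WHERE u.enrolled='Y' AND $CONDITIONS')
# good: double quotes preserve the inner single quotes; $ must be escaped
# so $CONDITIONS reaches Sqoop literally instead of being expanded.
good=$(echo "WHERE u.enrolled='Y' AND \$CONDITIONS")
echo "$bad"
echo "$good"
```

The first form delivers `WHERE u.enrolled=Y AND $CONDITIONS` (the quotes around Y are gone), while the second preserves them as Oracle expects.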