admin-create-user command doesn't work properly - amazon-web-services

I'm trying to run the admin-create-user CLI command as shown in the official docs, but it doesn't seem to run properly.
Not all of the attributes get created, even though they were in the command; I only ever get the last attribute typed in the command.
Am I doing something wrong? Is there a solution?
aws cognito-idp admin-create-user --user-pool-id us-west-2_aaaaaaaaa --username diego@example.com --user-attributes=Name=email,Value=kermit2@somewhere.com,Name=phone_number,Value="+15555551212" --message-action SUPPRESS
and I'm getting
{
    "User": {
        "Username": "diego@example.com",
        "Enabled": true,
        "UserStatus": "FORCE_CHANGE_PASSWORD",
        "UserCreateDate": 1566470568.864,
        "UserLastModifiedDate": 1566470568.864,
        "Attributes": [
            {
                "Name": "sub",
                "Value": "5dac8ce5-2997-4185-b862-86cf15aede77"
            },
            {
                "Name": "phone_number",
                "Value": "+15555551212"
            }
        ]
    }
}
instead of
{
    "User": {
        "Username": "7325c1de-b05b-4f84-b321-9adc6e61f4a2",
        "Enabled": true,
        "UserStatus": "FORCE_CHANGE_PASSWORD",
        "UserCreateDate": 1548099495.428,
        "UserLastModifiedDate": 1548099495.428,
        "Attributes": [
            {
                "Name": "sub",
                "Value": "7325c1de-b05b-4f84-b321-9adc6e61f4a2"
            },
            {
                "Name": "phone_number",
                "Value": "+15555551212"
            },
            {
                "Name": "email",
                "Value": "diego@example.com"
            }
        ]
    }
}

The shorthand notation you're using, as referenced in the docs here, does indeed produce the results you are seeing.
A quick way around this is to switch the user-attributes option to JSON format. With JSON, your command looks like this:
aws cognito-idp admin-create-user --user-pool-id us-west-2_aaaaaaaaa --username a567 --user-attributes '[{"Name": "email","Value": "kermit2@somewhere.com"},{"Name": "phone_number","Value": "+15555551212"}]' --message-action SUPPRESS
Which, when executed, produces this output:
{
    "User": {
        "Username": "a567",
        "Enabled": true,
        "UserStatus": "FORCE_CHANGE_PASSWORD",
        "UserCreateDate": 1566489693.408,
        "UserLastModifiedDate": 1566489693.408,
        "Attributes": [
            {
                "Name": "sub",
                "Value": "f6ff3e05-5f15-4a53-a45f-52e939b941fd"
            },
            {
                "Name": "phone_number",
                "Value": "+15555551212"
            },
            {
                "Name": "email",
                "Value": "kermit2@somewhere.com"
            }
        ]
    }
}
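As an aside, the shorthand form should also work if each Name/Value pair is passed as its own space-delimited item rather than joining them all with commas; a sketch, not verified against your pool:
# Each attribute entry is a separate space-delimited token, so the CLI
# builds two attribute structures instead of collapsing them into one.
aws cognito-idp admin-create-user \
  --user-pool-id us-west-2_aaaaaaaaa \
  --username diego@example.com \
  --user-attributes Name=email,Value=kermit2@somewhere.com Name=phone_number,Value="+15555551212" \
  --message-action SUPPRESS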

Related

Change DisplayName and ID of objects of an S3 bucket by AWS CLI

I have a bucket whose objects have two possible owners (kafka and dataware).
When I run get-object-acl on one of the objects, it comes back like this:
{
    "Owner": {
        "DisplayName": "dataware",
        "ID": "123456abc"
    },
    "Grants": [
        {
            "Grantee": {
                "DisplayName": "dataware",
                "ID": "123456abc",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}
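For reference, the output above comes from a call along these lines (the bucket and key names here are placeholders):
# Hypothetical bucket/key; prints the owner and grants shown above.
aws s3api get-object-acl --bucket my-bucket --key path/to/object.csv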
Is it possible, through the AWS CLI, to manually change the display name to kafka and its ID (or the other way around)? Something like this:
{
    "Owner": {
        "DisplayName": "kafka",
        "ID": "987654aaa"
    },
    "Grants": [
        {
            "Grantee": {
                "DisplayName": "kafka",
                "ID": "987654aaa",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}
Any suggestion is appreciated.

AWS CLI Describe Target Groups

Hi, I have an AWS CLI result describing a target group in JSON format:
{
    "TargetHealthDescriptions": [
        {
            "Target": {
                "Id": "1.1.1.1",
                "Port": 123,
                "AvailabilityZone": "ap-south-1"
            },
            "HealthCheckPort": "123",
            "TargetHealth": {
                "State": "healthy"
            }
        },
        {
            "Target": {
                "Id": "2.2.2.2",
                "Port": 123,
                "AvailabilityZone": "ap-south-1"
            },
            "HealthCheckPort": "123",
            "TargetHealth": {
                "State": "healthy"
            }
        }
    ]
}
I'm trying to write an AWS CLI query that produces a result like this:
[
    {
        "Id": "1.1.1.1",
        "Port": 123,
        "Health": null
    },
    {
        "Id": "2.2.2.2",
        "Port": 123,
        "Health": null
    }
]
I've tried several query methods, but I keep getting a null value for Health. Is there an error in the query?
Example query:
--query 'TargetHealthDescriptions[*].Target.{Id:Id, Port:Port, Health:TargetHealth.{state:State}}' --output json
Try the below. In your query, Target.{...} evaluates the multiselect hash against the Target object, so TargetHealth.State isn't visible there and comes back null; build the hash at the TargetHealthDescriptions level instead:
--query 'TargetHealthDescriptions[*].{Id:Target.Id,Port:Target.Port,Health:TargetHealth.State}'
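Put together as a full command, a sketch would look something like this (the target group ARN is a placeholder):
# Describe target health and project Id, Port and State into a flat list.
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:ap-south-1:123456789012:targetgroup/my-tg/abcdef0123456789 \
  --query 'TargetHealthDescriptions[*].{Id:Target.Id,Port:Target.Port,Health:TargetHealth.State}' \
  --output json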

Unable to query CloudWatch Logs AWS API endpoint

I am attempting to build a small web application for our internal team to view our CloudWatch logs. Right now I'm very early in development and am simply trying to access the logs via Postman using https://logs.us-east-1.amazonaws.com, as specified in the official AWS API documentation.
I have followed the steps to set up my POST request to the endpoint with the following headers:
(screenshot: Postman-generated headers)
Also, following the documentation, I have provided the Action in the body of this POST request:
{"Action": "DescribeLogGroups"}
Using the AWS CLI this works fine and I can see all my log groups.
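(For reference, the CLI equivalent would be roughly the following; the region is assumed.)
# Lists the log groups in the account for the given region.
aws logs describe-log-groups --region us-east-1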
When I send this request to https://logs.us-east-1.amazonaws.com I get back:
{
    "Output": {
        "__type": "com.amazon.coral.service#UnknownOperationException",
        "message": null
    },
    "Version": "1.0"
}
The status code is 200.
Things I have tried:
Removing the body of the request altogether -> results in "internal server error"
Appending /describeloggroups to the URL with no body -> results in "internal server error"
I'm truly not sure what I'm doing wrong here.
The best way is to set the X-Amz-Target header to Logs_20140328.DescribeLogGroups.
Here is an example request: https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeLogGroups.html#API_DescribeLogGroups_Example_1_Request
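If you want to sanity-check the call outside Postman first, here's a minimal curl sketch. It assumes curl 7.75+ (for --aws-sigv4), credentials exported in the environment, and the JSON-1.1 content type that the Logs API expects:
# Sign the request with SigV4 for the "logs" service in us-east-1.
curl "https://logs.us-east-1.amazonaws.com/" \
  --aws-sigv4 "aws:amz:us-east-1:logs" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  -H "X-Amz-Target: Logs_20140328.DescribeLogGroups" \
  -H "Content-Type: application/x-amz-json-1.1" \
  -d '{}'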
Below is a Postman collection you can try. Save it as a file and import it into Postman with File -> Import. It also requires you to set the credential and region variables in Postman.
{
    "info": {
        "name": "CloudWatch Logs",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
    },
    "item": [
        {
            "name": "DescribeLogs",
            "request": {
                "auth": {
                    "type": "awsv4",
                    "awsv4": [
                        {
                            "key": "sessionToken",
                            "value": "{{SESSION_TOKEN}}",
                            "type": "string"
                        },
                        {
                            "key": "service",
                            "value": "logs",
                            "type": "string"
                        },
                        {
                            "key": "region",
                            "value": "{{REGION}}",
                            "type": "string"
                        },
                        {
                            "key": "secretKey",
                            "value": "{{SECRET_ACCESS_KEY}}",
                            "type": "string"
                        },
                        {
                            "key": "accessKey",
                            "value": "{{ACCESS_KEY_ID}}",
                            "type": "string"
                        }
                    ]
                },
                "method": "POST",
                "header": [
                    {
                        "warning": "This is a duplicate header and will be overridden by the Content-Type header generated by Postman.",
                        "key": "Content-Type",
                        "type": "text",
                        "value": "application/json"
                    },
                    {
                        "key": "X-Amz-Target",
                        "type": "text",
                        "value": "Logs_20140328.DescribeLogGroups"
                    },
                    {
                        "warning": "This is a duplicate header and will be overridden by the host header generated by Postman.",
                        "key": "host",
                        "type": "text",
                        "value": "logs.{{REGION}}.amazonaws.com"
                    },
                    {
                        "key": "Accept",
                        "type": "text",
                        "value": "application/json"
                    },
                    {
                        "key": "Content-Encoding",
                        "type": "text",
                        "value": "amz-1.0"
                    }
                ],
                "body": {
                    "mode": "raw",
                    "raw": "{}"
                },
                "url": {
                    "raw": "https://logs.{{REGION}}.amazonaws.com",
                    "protocol": "https",
                    "host": [
                        "logs",
                        "{{REGION}}",
                        "amazonaws",
                        "com"
                    ]
                }
            },
            "response": []
        }
    ],
    "protocolProfileBehavior": {}
}
Try copying this into a JSON file, import it into Postman, and add the missing variables.
I did a DescribeLogGroups call against the "logs" service. Look in the docs here
https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeLogGroups.html#API_DescribeLogGroups_Example_1_Request
for more information about the headers and body.
PS: The session token is optional; I didn't need it in my case.
Hope it works for anyone who runs into the same issue:
{
    "info": {
        "_postman_id": "8660f3fc-fc6b-4a71-84ba-739d8b4ea7c2",
        "name": "CloudWatch Logs",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
    },
    "item": [
        {
            "name": "DescribeLogs",
            "request": {
                "auth": {
                    "type": "awsv4",
                    "awsv4": [
                        {
                            "key": "service",
                            "value": "{{AWS_SERVICE_NAME}}",
                            "type": "string"
                        },
                        {
                            "key": "region",
                            "value": "{{AWS_REGION}}",
                            "type": "string"
                        },
                        {
                            "key": "secretKey",
                            "value": "{{AWS_SECRET_ACCESS_KEY}}",
                            "type": "string"
                        },
                        {
                            "key": "accessKey",
                            "value": "{{AWS_ACCESS_KEY_ID}}",
                            "type": "string"
                        },
                        {
                            "key": "sessionToken",
                            "value": "",
                            "type": "string"
                        }
                    ]
                },
                "method": "POST",
                "header": [
                    {
                        "key": "X-Amz-Target",
                        "value": "Logs_20140328.DescribeLogGroups",
                        "type": "text"
                    },
                    {
                        "key": "Content-Encoding",
                        "value": "amz-1.0",
                        "type": "text"
                    }
                ],
                "body": {
                    "mode": "raw",
                    "raw": "{}",
                    "options": {
                        "raw": {
                            "language": "json"
                        }
                    }
                },
                "url": {
                    "raw": "https://{{AWS_SERVICE_NAME}}.{{AWS_REGION}}.amazonaws.com",
                    "protocol": "https",
                    "host": [
                        "{{AWS_SERVICE_NAME}}",
                        "{{AWS_REGION}}",
                        "amazonaws",
                        "com"
                    ]
                }
            },
            "response": []
        }
    ]
}

AWS Data Pipeline stuck on Waiting For Runner

My goal is to copy a table from a PostgreSQL database running on AWS RDS to a .csv file on Amazon S3. For this I use AWS Data Pipeline and found the following tutorial; however, when I follow all the steps my pipeline is stuck at "WAITING FOR RUNNER" (see screenshot). The AWS documentation states:
ensure that you set a valid value for either the runsOn or workerGroup
fields for those tasks
However, the "runs on" field is set. Any idea why this pipeline is stuck?
Here is my definition file:
{
    "objects": [
        {
            "output": {
                "ref": "DataNodeId_Z8iDO"
            },
            "input": {
                "ref": "DataNodeId_hEUzs"
            },
            "name": "DefaultCopyActivity01",
            "runsOn": {
                "ref": "ResourceId_oR8hY"
            },
            "id": "CopyActivityId_8zaDw",
            "type": "CopyActivity"
        },
        {
            "resourceRole": "DataPipelineDefaultResourceRole",
            "role": "DataPipelineDefaultRole",
            "name": "DefaultResource1",
            "id": "ResourceId_oR8hY",
            "type": "Ec2Resource",
            "terminateAfter": "1 Hour"
        },
        {
            "*password": "xxxxxxxxx",
            "name": "DefaultDatabase1",
            "id": "DatabaseId_BWxRr",
            "type": "RdsDatabase",
            "region": "eu-central-1",
            "rdsInstanceId": "aqueduct30v05.cgpnumwmfcqc.eu-central-1.rds.amazonaws.com",
            "username": "xxxx"
        },
        {
            "name": "DefaultDataFormat1",
            "id": "DataFormatId_wORsu",
            "type": "CSV"
        },
        {
            "database": {
                "ref": "DatabaseId_BWxRr"
            },
            "name": "DefaultDataNode2",
            "id": "DataNodeId_hEUzs",
            "type": "SqlDataNode",
            "table": "y2018m07d12_rh_ws_categorization_label_postgis_v01_v04",
            "selectQuery": "SELECT * FROM y2018m07d12_rh_ws_categorization_label_postgis_v01_v04 LIMIT 100"
        },
        {
            "failureAndRerunMode": "CASCADE",
            "resourceRole": "DataPipelineDefaultResourceRole",
            "role": "DataPipelineDefaultRole",
            "pipelineLogUri": "s3://rutgerhofste-data-pipeline/logs",
            "scheduleType": "ONDEMAND",
            "name": "Default",
            "id": "Default"
        },
        {
            "dataFormat": {
                "ref": "DataFormatId_wORsu"
            },
            "filePath": "s3://rutgerhofste-data-pipeline/test",
            "name": "DefaultDataNode1",
            "id": "DataNodeId_Z8iDO",
            "type": "S3DataNode"
        }
    ],
    "parameters": []
}
Usually "WAITING FOR RUNNER" state implies that it is waiting for a resource (such as an EMR cluster). You seem to have not set 'workGroup' field. It means that you have specified "What" to do, but have not specified "who" should do it.

How to upgrade Data Pipeline definition from EMR 3.x to 4.x/5.x?

I would like to upgrade my AWS data pipeline definition to EMR 4.x or 5.x, so I can take advantage of Hive's latest features (version 2.0+), such as CURRENT_DATE and CURRENT_TIMESTAMP, etc.
The change from EMR 3.x to 4.x/5.x requires the use of releaseLabel in EmrCluster, versus amiVersion.
When I use a "releaseLabel": "emr-4.1.0", I get the following error: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
Below is my data pipeline definition, for EMR 3.x. It works well, so I hope others find this useful (including the answer for emr 4.x/5.x), as the common answer/recommendation to importing data into DynamoDB from a file is to use Data Pipeline, but literally no one has put forward a solid & simple working example (say for custom data format).
{
    "objects": [
        {
            "type": "DynamoDBDataNode",
            "id": "DynamoDBDataNode1",
            "name": "OutputDynamoDBTable",
            "dataFormat": {
                "ref": "DynamoDBDataFormat1"
            },
            "region": "us-east-1",
            "tableName": "testImport"
        },
        {
            "type": "Custom",
            "id": "Custom1",
            "name": "InputCustomFormat",
            "column": [
                "firstName", "lastName"
            ],
            "columnSeparator" : "|",
            "recordSeparator" : "\n"
        },
        {
            "type": "S3DataNode",
            "id": "S3DataNode1",
            "name": "InputS3Data",
            "directoryPath": "s3://data.domain.com",
            "dataFormat": {
                "ref": "Custom1"
            }
        },
        {
            "id": "Default",
            "name": "Default",
            "scheduleType": "ondemand",
            "failureAndRerunMode": "CASCADE",
            "resourceRole": "DataPipelineDefaultResourceRole",
            "role": "DataPipelineDefaultRole",
            "pipelineLogUri": "s3://logs.data.domain.com"
        },
        {
            "type": "HiveActivity",
            "id": "HiveActivity1",
            "name": "S3ToDynamoDBImportActivity",
            "output": {
                "ref": "DynamoDBDataNode1"
            },
            "input": {
                "ref": "S3DataNode1"
            },
            "hiveScript": "INSERT OVERWRITE TABLE ${output1} SELECT reflect('java.util.UUID', 'randomUUID') as uuid, TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP())) as loadDate, firstName, lastName FROM ${input1};",
            "runsOn": {
                "ref": "EmrCluster1"
            }
        },
        {
            "type": "EmrCluster",
            "name": "EmrClusterForImport",
            "id": "EmrCluster1",
            "coreInstanceType": "m1.medium",
            "coreInstanceCount": "1",
            "masterInstanceType": "m1.medium",
            "amiVersion": "3.11.0",
            "region": "us-east-1",
            "terminateAfter": "1 Hours"
        },
        {
            "type": "DynamoDBDataFormat",
            "id": "DynamoDBDataFormat1",
            "name": "OutputDynamoDBDataFormat",
            "column": [
                "uuid", "loadDate", "firstName", "lastName"
            ]
        }
    ],
    "parameters": []
}
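For anyone reproducing this, here is a hedged sketch of how a definition like this is typically uploaded and run with the CLI (the pipeline name, unique ID, and pipeline ID below are placeholders):
# Create the pipeline shell, upload the definition above, then activate it.
aws datapipeline create-pipeline --name dynamodb-import --unique-id dynamodb-import-001
aws datapipeline put-pipeline-definition --pipeline-id df-EXAMPLE123456 --pipeline-definition file://definition.json
aws datapipeline activate-pipeline --pipeline-id df-EXAMPLE123456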
A sample input file could look like:
John|Doe
Jane|Doe
Carl|Doe
Bonus: rather than setting CURRENT_DATE in a column, how can I set it as a variable in the hiveScript section? I tried SET loadDate = CURRENT_DATE;\n\n INSERT OVERWRITE..." to no avail. Not shown in my example are other dynamic fields I would like to set before the query clause.