"checksum must be specified in PUT API, when the resource already exists" - amazon-web-services

I am getting the following error while building a bot with AWS Lex:
"checksum must be specified in PUT API, when the resource already exists"
Can someone tell me what it means and how to fix it?

I was getting the same error when building my bot in the console. I found the answer here.
Refresh the page and then set the version of the bot to Latest.

The documentation states that you have to provide the checksum of a bot that already exists if you are trying to update it: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/LexModelBuildingService.html#putBot-property
"checksum — (String)
Identifies a specific revision of the $LATEST version.
When you create a new bot, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.
When you want to update a bot, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don't specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception."
That's the aws-sdk for JavaScript docs, but the same concept applies to any SDK as well as the AWS CLI.
This requires calling get-bot first, which returns the checksum of the bot among other data. Save that checksum and pass it in the params when you call put-bot.
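A minimal boto3 sketch of that read-then-write flow, assuming a configured "lex-models" client; the bot name and the fields being updated are hypothetical:

```python
def update_bot(lex, name, **changes):
    """Update an existing Lex bot via put_bot without tripping the checksum check.

    `lex` is a boto3 "lex-models" client; `changes` holds whichever put_bot
    fields you want to modify.
    """
    # 1. Read the current $LATEST revision; the response carries the checksum.
    current = lex.get_bot(name=name, versionOrAlias="$LATEST")
    params = {
        "name": name,
        "locale": current["locale"],               # put_bot requires these
        "childDirected": current["childDirected"],
        "checksum": current["checksum"],           # the revision we intend to overwrite
    }
    params.update(changes)
    # 2. Write back with the checksum; omitting it on an existing bot
    #    triggers the "checksum must be specified in PUT API" error.
    return lex.put_bot(**params)

# Usage (assumes AWS credentials/region are configured):
#   import boto3
#   update_bot(boto3.client("lex-models"), "MyBot", description="new description")
```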
I would recommend using the tutorials here: https://docs.aws.amazon.com/lex/latest/dg/gs-console.html
That tutorial demonstrates using the AWS CLI, but the same concepts can be abstracted to use any SDK you desire.

I had the same problem.
I suspect that once you have published a bot, you can no longer modify or build it.
Create another bot.

Related

Recover EFS with aws start-restore-job in OneZone

I couldn't find an AvailabilityZoneName parameter in the startRestoreJob SDK documentation:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Backup.html#startRestoreJob-property
For this reason, when I restore the snapshot, it is created as REGIONAL.
The AWS console itself allows you to select this when restoring. Does anyone know a solution?
I was confronted with the same problem; the documentation does not seem to be aligned. I checked CloudTrail, but I only got a HIDDEN_DUE_TO_SECURITY_REASONS placeholder...
However, in Chrome's developer tools you can see the metadata attributes sent to the server, so you need to use the availabilityZoneName and singleAzFilesystem parameters.
You can pass the file system type information in the startRestoreJob API in the Metadata property.
To see the allowed values, you can call the GetRecoveryPointRestoreMetadata API to get the Metadata value for your recovery point, and then pass the values you get to the StartRestoreJob API.
Docs for the GetRecoveryPointRestoreMetadata API: https://docs.aws.amazon.com/aws-backup/latest/devguide/API_GetRecoveryPointRestoreMetadata.html
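A hedged boto3 sketch combining the two calls; the metadata handling (including the newFileSystem flag) is an assumption based on the answers above, so verify it against what GetRecoveryPointRestoreMetadata actually returns for your recovery point:

```python
def restore_efs_one_zone(backup, vault_name, recovery_point_arn, iam_role_arn):
    """Restore an EFS recovery point while preserving its One Zone placement.

    `backup` is a boto3 "backup" client, e.g. boto3.client("backup").
    """
    # 1. Read the restore metadata stored with the recovery point; for a
    #    One Zone file system it should carry availabilityZoneName /
    #    singleAzFilesystem.
    meta = backup.get_recovery_point_restore_metadata(
        BackupVaultName=vault_name,
        RecoveryPointArn=recovery_point_arn,
    )["RestoreMetadata"]
    # 2. Ask for a new file system rather than overwriting the source.
    meta["newFileSystem"] = "true"
    # 3. Pass the metadata straight back to StartRestoreJob so the zone
    #    placement survives the restore.
    return backup.start_restore_job(
        RecoveryPointArn=recovery_point_arn,
        Metadata=meta,
        IamRoleArn=iam_role_arn,
        ResourceType="EFS",
    )
```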

Errors when using DialogFlow "restore agent" API

We have suddenly started experiencing an error when using the DialogFlow "restore agent" API. The call is failing with the error:
400 com.google.apps.framework.request.BadRequestException: Invalid
agent zip. Missing required json file agent.json
Oddly, it only seems to happen for newly created DialogFlow agents, but not for older/existing ones. We are using this API so that we can programmatically create a custom agent using our own intents/entities. This code has been working for about the past two years, with no changes on our side. We are using the official DialogFlow client library for Python. We have been on version 0.2.0, and I tried updating to the latest (0.8.0) but there was no change.
I tried changing our code to include the agent.json file (by using the "export agent" API and getting the agent.json file from there). In that case, I no longer get the above error and the restore appears to succeed. However, the agent then seems to be corrupt in some way. When trying to click on any intent -- or various other operations in the DialogFlow console -- I get the error:
Failed to get Training Phrases Errorid=xxx
(where xxx seems to be a UUID that changes each time)
Trying to export the agent in that state also displays an error:
Error downloading agent
Occasionally, even including the agent.json as above, the restore will still fail but return the error:
500 Internal error encountered.
I appreciate any ideas on how we can get this working again. Thanks!
After a lot of trial and error I found the solution. Here it is in case anyone else runs into this. Something must have changed recently in how DialogFlow processes the zip upload during the "restore agent" operation --
1) The agent.json file is now required in the zip file, where before it was optional
2) We found some of the "id" elements in our _usersays files for various intents were not valid UUIDs. Previously this did not cause any error, but now the agent winds up in an invalid state ("Failed to get Training Phrases" error, etc as mentioned above).
An easy way to fix this is to export one of your existing agents and copy its agent.json and package.json into your current directory before uploading.
agent.json is now required by DialogFlow.
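If you build the zip programmatically, you can guard against the missing file before calling "restore agent". A small self-contained sketch using Python's zipfile; the agent.json content itself should come from a real "export agent" call, as described above:

```python
import io
import zipfile

def ensure_agent_json(zip_bytes, agent_json_bytes):
    """Return agent-zip content that definitely contains agent.json.

    If the zip already has agent.json, return it unchanged; otherwise
    append one copied from an exported agent.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        if "agent.json" in zf.namelist():
            return zip_bytes  # already present, nothing to do
    buf = io.BytesIO(zip_bytes)
    with zipfile.ZipFile(buf, "a") as zf:  # append mode keeps existing entries
        zf.writestr("agent.json", agent_json_bytes)
    return buf.getvalue()
```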

AWS API creation delay?

I'm currently using aws-amplify for react native. I'm using the API command to do HTTP requests and I was able to call a GET method API that I created few months ago. The thing is when I call a function that's freshly made, it keeps telling me that the "Api 'API name' doesn't exist". I tried to see if there was any typo, but nothing. I also created a GET method API with the exact same code as the one that was working but same result (api doesn't exist). Has anyone also run into this issue?
Can you post your sample code, including the object you're passing to Amplify.configure(object) and also the GET call, which includes the name (API.get(name, path, options))? You'll need to ensure that the name is in the endpoints[] array of your configuration object per the documentation and that it matches the name passed in your method call. For more information see this page: https://github.com/aws/aws-amplify/blob/master/media/api_guide.md

wso2am 2.1.0 API export - No enum constant APIDTO.TypeEnum.NULL

While exporting an API definition through a REST service, I got the following exception:
ERROR - GlobalThrowableMapper An Unknown exception has been captured by global exception mapper.
java.lang.IllegalArgumentException: No enum constant org.wso2.carbon.apimgt.rest.api.publisher.dto.APIDTO.TypeEnum.NULL
at java.lang.Enum.valueOf(Enum.java:238)
at org.wso2.carbon.apimgt.rest.api.publisher.dto.APIDTO$TypeEnum.valueOf(APIDTO.java:63)
at org.wso2.carbon.apimgt.rest.api.publisher.utils.mappings.APIMappingUtil.fromAPItoDTO(APIMappingUtil.java:239)
at org.wso2.carbon.apimgt.rest.api.publisher.impl.ApisApiServiceImpl.apisApiIdGet(ApisApiServiceImpl.java:380)
at org.wso2.carbon.apimgt.rest.api.publisher.ApisApi.apisApiIdGet(ApisApi.java:229)
If I import an API through the REST APIM API, I can GET / export it. But as soon as I update the set of resources manually in the publisher (deleting a resource and adding another one), this exception occurs.
Thank you all for any hints.
It's a bug in API Manager; it is resolved in issue https://wso2.org/jira/browse/APIMANAGER-5759. The pull request was merged only recently, so there is no official build including it yet. However, you can manually patch it on your system.
Apparently some things which seem to be optional need to be defined to make the API exportable, such as the parameter description. Specifying a parameter description helped in this case.

AWS | Boto3 | RDS |function DownloadDBLogFilePortion |cannot download a log file because it contains binary data |

When I try to download all log files from a RDS instance, in some cases, I found this error in my python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
rds -> boto3 RDS client
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
Like I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the solution proposed by AWS support.
LATEST UPDATE: This is an extract of my discussion with aws support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team is aware of this issue and working on a way to resolve it; however, they do not have an ETA for when this will be released.
So the solution is: use the Java API.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
An invalid parameter in boto means the data you passed does not comply with the API: probably an invalid name you specified, possibly something wrong with your variable id_rds, or maybe your LogFileName. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact name of a file that exists inside the RDS instance.
Please make sure the log file EXISTS inside the instance. Use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Also check Marker (string) and NumberOfLines (integer) for mismatched types or out-of-range values. Since they are not required, skip them first, then add them back and test later.
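Putting the pieces together, a boto3 sketch that follows the Marker/AdditionalDataPending protocol for one log file; the instance and file names are placeholders, and files containing binary data will still raise InvalidParameterValue, per the discussion above:

```python
import time

def download_full_log(rds, instance_id, log_file, lines_per_page=1000, pause=0.2):
    """Download one RDS log file page by page via download_db_log_file_portion.

    `rds` is a boto3 RDS client. Text-only: binary log files cannot be
    fetched this way.
    """
    marker, chunks = "0", []  # Marker "0" starts from the beginning of the file
    while True:
        page = rds.download_db_log_file_portion(
            DBInstanceIdentifier=instance_id,
            LogFileName=log_file,
            Marker=marker,
            NumberOfLines=lines_per_page,
        )
        chunks.append(page.get("LogFileData") or "")
        if not page.get("AdditionalDataPending"):
            return "".join(chunks)
        marker = page["Marker"]   # continue from where the last page stopped
        time.sleep(pause)         # crude throttling guard

# Usage (assumes AWS credentials/region are configured):
#   import boto3
#   text = download_full_log(boto3.client("rds"), "my-rds-name", "error/postgresql.log")
```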