AWS Amplify Duplicate Error: Duplicated files or mocks

I set up a new Amplify project, added auth, and a post-confirmation Lambda function to move user data into DynamoDB. When I run npm start, I get this error:
Failed to construct transformer: DuplicateError: Duplicated files or mocks. Please check the console for more info
at setModule (C:\Users\cjfew\Desktop\Fresh\MyDemo\node_modules\jest-haste-map\build\index.js:543:17)
.js:426:22 {
mockPath1: 'amplify#current-cloud-backend\function\FreshAuthPostConfirmation\src\package.json',
mockPath2: 'amplify\backend\function\FreshAuthPostConfirmation\src\package.json'
}
Based on what I have read, #current-cloud-backend gets created by Amplify from the files in the backend folder. It seems like that package.json is supposed to be there, so I am not sure why it causes an error. I saw somewhere that I should just delete the duplicate file, which I assumed to be the one in #current-cloud-backend, but Amplify is going to keep recreating that file every time I push. How do I prevent this error from happening at all?

There is a discussion about this error in this Amplify GitHub Issue. The file package.json appears twice to jest-haste-map, and the solution is to explicitly ignore the #current-cloud-backend folder when building and starting your app.
The solution depends on your version of React Native: here you can find an overview of how exclusion of files works for different versions. For example, you can create a metro.config.js file with the following contents to exclude #current-cloud-backend:
// metro.config.js
const exclusionList = require('metro-config/src/defaults/exclusionList');

module.exports = {
  resolver: {
    // Tell jest-haste-map to skip the mirrored backend folder.
    // Note: newer Metro versions name this option blockList instead of blacklistRE.
    blacklistRE: exclusionList([/#current-cloud-backend\/.*/])
  }
};
Then install metro-config as a dev dependency, as shown below. If that doesn't work, there are some other solutions in the links above that you can try.
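For reference, the commands look roughly like this (restarting Metro with a cache reset is my assumption, since Metro tends to cache the old resolver config):
npm install --save-dev metro-config
npx react-native start --reset-cache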

Related

Getting error, "Entity doesn't exist in AsyncLocal" when trying to call CreateBatchWrite<T> method of DynamoDBContext object

I have created a DynamoDB table on my dev machine and I'm trying to insert a couple of rows from my .NET Core application using the CreateBatchWrite<T> method of the DynamoDBContext object. I'm able to query the table from the DynamoDB JavaScript Shell window at the localhost:8000/shell URL and it returns a row count of 0. But when trying to call the CreateBatchWrite<T> method I get the error, "Entity doesn't exist in AsyncLocal".
Explanation
When using X-Ray, this happens when there is an attempt to create a subsegment without a parent segment. Depending on your setup, running a query may try to create a subsegment, which fails because there is no parent segment.
This is common when running a Lambda function locally, as the Mock Lambda Test Tool will not create a segment for you the way the actual Lambda environment on AWS does. This can happen in other scenarios too.
More details here: https://github.com/aws/aws-xray-sdk-dotnet/issues/125
Solution
The easiest way to solve this is to disable X-Ray locally (as you probably don't want to generate traces locally):
In appsettings.Development.json add this:
"XRay": {
"DisableXRayTracing": "true",
"UseRuntimeErrors": "false",
"CollectSqlQueries": "false"
}
The important bit is DisableXRayTracing set to true. Since appsettings.Development.json is only loaded in the Development environment, this will not disable tracing when the code runs on AWS.
Make sure your appsettings.Development.json is set to Copy Always in the properties window. You can do this by including this in your .csproj:
<ItemGroup>
  <None Update="appsettings.Development.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
If you really want to trace things locally, then make sure you create a parent segment only when running locally (on AWS this would cause problems, as you would have two parent segments: one created manually by you, another created by AWS).
Add this line before any DynamoDB API methods are executed; it makes the X-Ray SDK log a missing context instead of throwing an exception:
AWSXRayRecorder.Instance.ContextMissingStrategy = ContextMissingStrategy.LOG_ERROR;
You can find more info in GitHub discussion https://github.com/aws/aws-xray-sdk-dotnet/issues/69#issuecomment-482688754
Also, you will need to import these two namespaces:
using Amazon.XRay.Recorder.Core;
using Amazon.XRay.Recorder.Core.Strategies;
If you are tracing requests made with the AWS SDK, the X-Ray SDK attempts to generate a subsegment automatically to represent those requests, such as CreateBatchWrite. However, a subsegment can only be created as the child of an existing segment, so if you have not created a segment beforehand, that "Entity doesn't exist in AsyncLocal" error will occur.
See these docs for how to create custom segments. Alternatively, if you are developing a web app, the X-Ray SDK can automatically create segments for requests made to your service by adding the configuration described in these docs.
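As a rough illustration of the "create a parent segment only when running locally" advice above, here is a minimal sketch; the segment name and the way local execution is detected are assumptions for the example, not requirements of the X-Ray SDK:
using System;
using Amazon.XRay.Recorder.Core;

// Assumption: no AWS_LAMBDA_FUNCTION_NAME variable means we are not on Lambda.
bool isLocal = Environment.GetEnvironmentVariable("AWS_LAMBDA_FUNCTION_NAME") == null;

if (isLocal)
    AWSXRayRecorder.Instance.BeginSegment("LocalTestSegment"); // illustrative name

try
{
    // ... run the DynamoDB calls here, e.g. context.CreateBatchWrite<T>(...) ...
}
finally
{
    if (isLocal)
        AWSXRayRecorder.Instance.EndSegment();
}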

Errors when using DialogFlow "restore agent" API

We have suddenly started experiencing an error when using the DialogFlow "restore agent" API. The call is failing with the error:
400 com.google.apps.framework.request.BadRequestException: Invalid agent zip. Missing required json file agent.json
Oddly, it only seems to happen for newly created DialogFlow agents, but not for older/existing ones. We are using this API so that we can programmatically create a custom agent using our own intents/entities. This code has been working for about the past two years, with no changes on our side. We are using the official DialogFlow client library for Python. We have been on version 0.2.0, and I tried updating to the latest (0.8.0) but there was no change.
I tried changing our code to include the agent.json file (by using the "export agent" API and getting the agent.json file from there). In that case, I no longer get the above error and the restore appears to succeed. However, the agent then seems to be corrupt in some way. When trying to click on any intent -- or various other operations in the DialogFlow console -- I get the error:
Failed to get Training Phrases Errorid=xxx
(where xxx seems to be a UUID that changes each time)
Trying to export the agent in that state also displays an error:
Error downloading agent
Occasionally, even including the agent.json as above, the restore will still fail but return the error:
500 Internal error encountered.
I appreciate any ideas on how we can get this working again. Thanks!
After a lot of trial and error I found the solution. Here it is in case anyone else runs into this. Something must have changed recently in how DialogFlow processes the zip upload during the "restore agent" operation --
1) The agent.json file is now required in the zip file, where before it was optional
2) We found some of the "id" elements in our _usersays files for various intents were not valid UUIDs. Previously this did not cause any error, but now the agent winds up in an invalid state ("Failed to get Training Phrases" error, etc as mentioned above).
An easy way to fix this is to export one of your existing agents and copy its agent.json and package.json into your current directory before zipping and uploading. agent.json is now required by Dialogflow.
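For reference, an exported agent zip generally looks something like this at the top level (an illustrative layout; exact file names depend on your intents, entities, and languages):
agent.json
package.json
intents/YourIntent.json
intents/YourIntent_usersays_en.json
entities/YourEntity.json
entities/YourEntity_entries_en.json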

Expo - changing the project name without deleting the entire project in TestFlight

Is it possible to change the name of an Expo project without having to go through the entire process of building it again and submitting to the app store?
I accidentally didn't update the project name in the app.json file and now am stuck with an app called exmilti in TestFlight.
It took a few days to build, submit, and get the approval for TestFlight so I would love to avoid that process if there is a simple fix.
When I attempt to rebuild it in the CLI with the new name I get an error:
Reason: Unexpected response, raw:
{"responseId":"ed00c05f-82a0-41d6-9a7c-b48d04e68a1a","resultCode":35,"resultString":"There were errors in the data supplied. Please correct and re-submit.","userString":"Multiple profiles found with the name 'com.myComapanyName.AppName AppStore'. Please remove the duplicate profiles and try again."}
Which suggests to me that I am going to have to fully remove the app from TestFlight (yikes), then re-upload the newly named app and wait for them to approve it again.
Any advice?
Update: I did not find an easier way, so I just rebuilt the project and pushed it via Expo again with the correct name. It didn't take as long as I had expected.
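For anyone hitting this later, the name that ends up in TestFlight comes from app.json at build time; a minimal sketch with placeholder values (not taken from the question):
{
  "expo": {
    "name": "My App Name",
    "slug": "my-app-name",
    "ios": {
      "bundleIdentifier": "com.myCompanyName.AppName"
    }
  }
}
Updating "name" here and rebuilding is, as the update above says, the only route that seems to work.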

Executing Alexa Tutorial Code ALWAYS Fails - Beginner

I'm new to Alexa Skill development and I'm sure this issue is process/environmental due to lack of experience.
Whenever I try to use a sample from an official Alexa tutorial, I can never get the skill to pass the first test - always getting an error :(
In this case I am trying to run and fiddle with this tutorial:
https://developer.amazon.com/blogs/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill
What is happening / What I've done:
I download the Node SDK from the Git link, and I also download the sample from the Git link. I then create a new ZIP that contains the sample code with the Node SDK included in the path /src/alexa-sdk/.
I go to AWS and create a new function, not using a blueprint. I 'author from scratch' and create a function with the Skills Kit as a trigger. I name the function and use Node 6.10 runtime.
I upload my ZIP file and leave all boxes default, for Role I choose Custom Role then pick Basic Execution from the Role screen.
I leave the rest blank, go to NEXT and CREATE.
The function is created okay, but I do see this error 'This function contains external libraries. Uploading a new file will override these libraries.'
Here's the problem - this is the point of failure on all tutorials I've tried so far. I go to Configure Test Event, choose ALEXA START SESSION as the template, and click Save And Test...
EXECUTION RESULT FAILED:
{
  "errorMessage": "Cannot find module '/var/task/index'",
  "errorType": "Error",
  "stackTrace": [
    "require (internal/module.js:20:19)"
  ]
}
Here's something from associated error logs, unsure if it's useful:
Unable to import module 'index': Error
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
I have noticed two things that I suspect may be an issue:
1) When I go to the CODE tab for this function, I see this message:
Your Lambda function "testprojectx" cannot be edited inline since the file name specified in the handler does not match a file name in your deployment package.
2) When I look at the code that's inserted into the test when I choose ALEXA SESSION START, I see many instances of 'unique value here':
amzn1.echo-api.session.[unique-value-here]
Although, there is no mention of this in the tutorial link I am referencing.
I'm really downhearted about it now, as this is the third tutorial I've tried to configure. Can anybody with experience follow the steps I've taken and point me in the right direction?
Thank you SO MUCH in advance if so.
EDIT: Absolute Clarification on how I am creating the ZIP file
I'm using Windows 10 and Chrome to download the files from GitHub.
I download the skill-sample-nodejs-decision-tree-master ZIP file from GitHub,
I do not know how to use NPM so I do this simply via downloading to desktop.
I then download the alexa-skills-kit-sdk-for-nodejs-master.ZIP file to desktop.
I unzip the contents of decision-tree-master into a folder on the desktop also called alexa-skills-kit-sdk-for-nodejs-master.
Within this folder, I navigate to /src/ and create a new folder called 'node_modules' within /src/.
Within /src/node_modules/ I now create another new folder called 'alexa-sdk'.
I unzip the contents of alexa-skills-kit-sdk-for-nodejs-master.zip into /src/node_modules/alexa-sdk/.
I have tried two approaches from here - both fail:
1) I ZIP only the contents of /src/ (not including the /src/ folder itself) and upload to Amazon.
2) I ZIP the entire 'decision-tree-master' folder and upload to Amazon.
I must be missing something, as I said this is just one of many Alexa tutorials I've tried to get working and this always happens :( So disheartened now.
This is a common issue I have seen in many posts. In most cases the problem is how the files are zipped. The default handler setting index.handler expects index.js at the root of the ZIP; if you zip the folder itself, the file ends up in a subfolder and Lambda fails with "Cannot find module '/var/task/index'". So instead of zipping the folder, select all the files inside it and zip those, as in the sketch below.
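On Windows 10 you can do this from PowerShell; a minimal sketch, assuming the sample's code (index.js, node_modules, etc.) sits in the src folder:
cd skill-sample-nodejs-decision-tree-master\src
Compress-Archive -Path * -DestinationPath ..\skill.zip
This puts index.js at the root of skill.zip, which is where the default index.handler setting expects to find it.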

Unable to upload to S3 with Grails S3 Demo Application

I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this, more specifically the S3 upload is only for the 'Hotel' example at the end.
When I run the project and go to upload the image, I get an "updated" message but nothing actually happens - there's no inserted URL in the dbconsole table.
I think the issue lies with how I am running the project, I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where I am supplementing the X's for my keys obviously).
This method of running the project appears to be slightly different to the method shown in the example. I run my project from the command line and I do not use GGTS, just Sublime.
I have tried inserting my AWS keys into the application.yml but I receive an internal server error then.
Can anyone help me out here?
Check your bucket policy in S3. You need to grant permissions to the API user to allow uploads; see the sketch below.
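For example, an IAM policy along these lines, attached to the user whose access keys you are passing in, would allow the upload (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}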