Expo - changing the project name without deleting the entire project in TestFlight

Is it possible to change the name of an Expo project without having to go through the entire process of building it again and submitting to the app store?
I accidentally didn't update the project name in the app.json file and now am stuck with an app called exmilti in TestFlight.
It took a few days to build, submit, and get the approval for TestFlight so I would love to avoid that process if there is a simple fix.
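For reference, the name shown in TestFlight comes from the app.json that was in place at build time. The relevant fields look roughly like this (all values here are placeholders):

{
  "expo": {
    "name": "My App Name",
    "slug": "my-app-name",
    "ios": {
      "bundleIdentifier": "com.myCompanyName.AppName"
    }
  }
}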
When I attempt to rebuild it in the CLI with the new name I get an error:
Reason: Unexpected response, raw:
{"responseId":"ed00c05f-82a0-41d6-9a7c-b48d04e68a1a","resultCode":35,"resultString":"There were errors in the data supplied. Please correct and re-submit.","userString":"Multiple profiles found with the name 'com.myComapanyName.AppName AppStore'. Please remove the duplicate profiles and try again."}
Which means to me that I am going to have to fully remove the app from TestFlight (yikes), then re-upload the newly named app and wait for them to approve it again.
Any advice?

Update - I did not find an easier way, so I just rebuilt the project and pushed it via Expo again with the correct name.
It didn't take as long as I had expected.
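For anyone repeating this, the rebuild-and-resubmit flow with the classic expo-cli of that era looked roughly like this (a sketch, not a definitive recipe):

# after correcting "name" (and, if needed, "slug") in app.json
expo build:ios      # produces a new IPA
expo upload:ios     # submits the build to App Store Connect / TestFlight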

Related

AWS Amplify Duplicate Error: Duplicated files or mocks

I set up a new Amplify project, added auth, and added a post-confirmation Lambda function to move user data into DynamoDB. When I run npm start, I get this error:
Failed to construct transformer: DuplicateError: Duplicated files or mocks. Please check the console for more info
    at setModule (C:\Users\cjfew\Desktop\Fresh\MyDemo\node_modules\jest-haste-map\build\index.js:543:17)
    .js:426:22 {
  mockPath1: 'amplify\#current-cloud-backend\function\FreshAuthPostConfirmation\src\package.json',
  mockPath2: 'amplify\backend\function\FreshAuthPostConfirmation\src\package.json'
}
Based on what I have read, #current-cloud-backend gets created by Amplify from the files in the backend folder. It seems like that package.json is supposed to be there, so I am not sure why it causes an error. I saw somewhere that I should just delete the duplicate file, which I assumed to be the one in #current-cloud-backend, but Amplify will keep recreating it every time I push. How do I avoid this happening at all?
There is a discussion about this error in this Amplify GitHub Issue. The file package.json appears twice to jest-haste-map, and the solution is to explicitly ignore the #current-cloud-backend folder when building and starting your app.
The solution to the problem depends on your version of React Native: here you find an overview of how exclusion of files works for different versions. For example, you can create a metro.config.js file with the following contents to exclude the #current-cloud-backend folder:
const exclusionList = require('metro-config/src/defaults/exclusionList');

module.exports = {
  resolver: {
    blacklistRE: exclusionList([/#current-cloud-backend\/.*/])
  }
};
Also install metro-config as a dev dependency. If that doesn't work, there are some other solutions in the links that you can try.
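If the duplicate warning comes from Jest itself rather than Metro (both rely on jest-haste-map), an equivalent exclusion (my assumption, not something from the thread) is to ignore the folder in the Jest section of package.json:

{
  "jest": {
    "modulePathIgnorePatterns": ["<rootDir>/amplify/#current-cloud-backend"]
  }
}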

Errors when using DialogFlow "restore agent" API

We have suddenly started experiencing an error when using the DialogFlow "restore agent" API. The call is failing with the error:
400 com.google.apps.framework.request.BadRequestException: Invalid agent zip. Missing required json file agent.json
Oddly, it only seems to happen for newly created DialogFlow agents, but not for older/existing ones. We are using this API so that we can programmatically create a custom agent using our own intents/entities. This code has been working for about the past two years, with no changes on our side. We are using the official DialogFlow client library for Python. We have been on version 0.2.0, and I tried updating to the latest (0.8.0) but there was no change.
I tried changing our code to include the agent.json file (by using the "export agent" API and getting the agent.json file from there). In that case, I no longer get the above error and the restore appears to succeed. However, the agent then seems to be corrupt in some way. When trying to click on any intent -- or various other operations in the DialogFlow console -- I get the error:
Failed to get Training Phrases Errorid=xxx
(where xxx seems to be a UUID that changes each time)
Trying to export the agent in that state also displays an error:
Error downloading agent
Occasionally, even including the agent.json as above, the restore will still fail but return the error:
500 Internal error encountered.
I appreciate any ideas on how we can get this working again. Thanks!
After a lot of trial and error I found the solution. Here it is in case anyone else runs into this. Something must have changed recently in how DialogFlow processes the zip upload during the "restore agent" operation:
1) The agent.json file is now required in the zip file, where before it was optional.
2) Some of the "id" elements in the _usersays files for various intents were not valid UUIDs. Previously this did not cause any error, but now the agent winds up in an invalid state (the "Failed to get Training Phrases" error, etc., as mentioned above); see the snippet after this list for what a valid entry looks like.
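For reference, a _usersays entry from an exported agent looks roughly like this (the exact field set may vary by export version; the UUID below is a placeholder and must be a valid one):

[
  {
    "id": "3f2b1a0c-9d8e-4f7a-b6c5-d4e3f2a1b0c9",
    "data": [
      { "text": "hello there", "userDefined": false }
    ],
    "isTemplate": false,
    "count": 0
  }
]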
An easy way to fix this is to export one of your existing agents and copy its agent.json and package.json into the directory you zip up before uploading.
agent.json is now required by DialogFlow.
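As an illustration only (the asker used the Python client, but the other code on this page is JavaScript), a minimal restore with the official Node client @google-cloud/dialogflow might look like the sketch below; the project ID and file name are placeholders:

// npm install @google-cloud/dialogflow
const { AgentsClient } = require('@google-cloud/dialogflow');
const fs = require('fs');

async function restore() {
  const client = new AgentsClient();
  // The zip must now contain agent.json (and package.json) at its root.
  const agentContent = fs.readFileSync('agent.zip');
  const [operation] = await client.restoreAgent({
    parent: 'projects/my-project-id', // placeholder project
    agentContent,
  });
  await operation.promise(); // wait for the long-running restore to complete
}

restore().catch(console.error);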

Issue with uploading GeoLite2-City.mmdb.missing file in mautic

I have a Mautic marketing automation installation on my server (I am a beginner).
I ran into this issue when configuring the GeoLite2-City IP lookup:
Automatically fetching the IP lookup data failed. Download http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz, extract if necessary, and upload to /home/ol*****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb.
What I attempted:
I FTPed into the /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/ directory
and uploaded the file (the original GeoLite2-City.mmdb is 0 bytes, while the newly added file is about 6000 KB).
However, once I go back into Mautic to run the lookup, the newly added file reverts to 0 bytes and I still can't get the IP lookup configured.
I have also changed the file permissions to 0744, but the issue persists.
Did you disable the cron job which looks for the file? If not, or if you clicked the button again in the dashboard, it will overwrite the file you manually placed there.
As a side note, the 2.16 release addresses this issue, please take a look at https://www.mautic.org/blog/community/announcing-mautic-2-16/.
Please ensure you take a full backup (files and database) and where possible, run the update at command line to avoid browser timeouts :)
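For reference, the command-line update on Mautic 2.x looks roughly like this (a sketch assuming the standard Mautic console commands, run from the Mautic root):

# take a full files + database backup first
php app/console mautic:update:find    # check whether an update is available
php app/console mautic:update:apply   # apply the update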

Executing Alexa Tutorial Code ALWAYS Fails - Beginner

I'm new to Alexa Skill development and I'm sure this issue is process/environmental due to lack of experience.
Whenever I try to use a sample from an official Alexa tutorial, I can never get the skill to pass the first test, always getting an error :(
In this case I am trying to run and fiddle with this tutorial:
https://developer.amazon.com/blogs/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill
What is happening / What I've done:
I download the Node SDK from the Git link, and I also download the sample from the Git link. I then create a new ZIP that contains the sample code with the Node SDK included in the path /src/alexa-sdk/.
I go to AWS and create a new function, not using a blueprint. I 'author from scratch' and create a function with the Skills Kit as a trigger. I name the function and use Node 6.10 runtime.
I upload my ZIP file and leave all boxes default, for Role I choose Custom Role then pick Basic Execution from the Role screen.
I leave the rest blank, go to NEXT and CREATE.
The function is created okay, but I do see this error 'This function contains external libraries. Uploading a new file will override these libraries.'
Here's the problem - this is the point of failure on all tutorials I've tried so far. I go to Configure Test Event, I choose ALEXA START SESSION as the template and click Save And Test...
EXECUTION RESULT FAILED:
{
  "errorMessage": "Cannot find module '/var/task/index'",
  "errorType": "Error",
  "stackTrace": [
    "require (internal/module.js:20:19)"
  ]
}
Here's something from associated error logs, unsure if it's useful:
Unable to import module 'index': Error
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
I have noticed two things that I suspect may be an issue:
1) When I go to the CODE tab for this function, I see this message:
Your Lambda function "testprojectx" cannot be edited inline since the file name specified in the handler does not match a file name in your deployment package.
2) When I look at the code that's inserted into the test when I choose ALEXA SESSION START, I see many instances of 'unique value here':
amzn1.echo-api.session.[unique-value-here]
Although, there is no mention of this in the tutorial link I am referencing.
I'm really downhearted about it now as this is the 3rd tutorial I've tried to configure. Can anybody with experience follow the steps I've taken and point me in the right direction?
Thank you SO MUCH in advance if so.
EDIT: Absolute Clarification on how I am creating the ZIP file
I'm using Windows 10 and Chrome to download the files from GitHub.
I download the skill-sample-nodejs-decision-tree-master ZIP file from GitHub,
I do not know how to use NPM so I do this simply via downloading to desktop.
I then download the alexa-skills-kit-sdk-for-nodejs-master.ZIP file to desktop.
I unzip the contents of decision-tree-master into a folder on the desktop also called alexa-skills-kit-sdk-for-nodejs-master.
Within this folder, I navigate to /src/ and create a new folder called 'node_modules' within /src/.
Within /src/node_modules/ I now create another new folder called 'alexa-sdk'.
I unzip the contents of alexa-skills-kit-sdk-for-nodejs-master.zip into /src/node_modules/alexa-sdk/.
I have tried two approaches from here - both fail:
1) I ZIP only the contents of /src/ (not including the /src/ folder itself) and upload to Amazon.
2) I ZIP the entire 'decision-tree-master' folder and upload to Amazon.
I must be missing something, as I said this is just one of many Alexa tutorials I've tried to get working and this always happens :( So disheartened now.
This is a common issue I have seen in many posts. In most cases, the problem is how the files are zipped. Instead of zipping the folder itself, you have to select all the files inside it and zip those, as shown below.
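The key point is the layout inside the zip: index.js has to sit at the root so that the Lambda handler setting index.handler resolves to /var/task/index.js. A sketch of the expected layout (file names from the tutorial; exact contents vary):

my-skill.zip
├── index.js            <- handler file at the zip root, not inside a subfolder
└── node_modules/
    └── alexa-sdk/      <- the SDK the sample requires

If index.js instead ends up at some-folder/index.js inside the zip, Lambda fails with exactly the "Cannot find module '/var/task/index'" error shown above.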

Can not run 'rake paperclip:refresh:thumbnails CLASS=Spree::Image' in rails spree app console getting No Such Key

I am trying to run RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
on my remote server, in my current Rails app directory, so I can refresh the Spree images that I have uploaded in the past.
I am using S3, my bucket is setup correctly as I can see each of my product's images in individual ID folders in my AWS S3 bucket.
But each time I run the above command I get a 'No Such Key' error and the rake task is aborted.
This command runs locally and works fine. (obviously without the RAILS_ENV=production locally)
Ok so I wrote this question to answer it myself. I hope the question makes sense.
For clarity, I had this issue because of old images (non-existent paths associated with an old S3 key) that I had uploaded under another S3 key during earlier testing on the same Rails app, while trying to get S3 to work with my Rails Spree application.
What I did to solve this was go into my Rails console on my remote server with this command:
$RAILS_ENV=production rails c
I then ordered the list of all Spree::Images with this:
$y Spree::Image.all(:order => 'attachment_updated_at')
The 'y' is a nice little YAML way of displaying the information of the Spree::Image that is a little more human-readable.
Next I looked at the ID of each Image and noticed that a good number of them had IDs that did not match folders in my AWS S3 bucket.
In my Case the lowest ID number that was in fact a folder in my S3 bucket was '1078' so I ran this:
$Spree::Image.where('id < ?', 1078).destroy_all
This deleted any Spree::Image that had an ID of 1077 or less.
Finally, I closed rails console and ran this on my remote server inside my current rails app directory. (In my case is was /home/deployer/apps/potentialapp/current/)
$RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
This reformatted my uploaded images on Spree and everything is now working great.
Hope this saves someone a great big headache. (Oh, and empty your cache when you test whether the images have in fact reloaded; I almost cried at 4 am last night.)
I solved the same problem using the console, skipping errors from old/broken S3 assets:

Spree::Image.all.each { |i| i.attachment.reprocess! rescue nil }