Mup setup/deploy error when using meteor-up (amazon-web-services)

I want to deploy my website using meteor-up to an EC2 instance on AWS. I have created my mup.json file and configured it as follows:
{
  // Server authentication info
  "servers": [
    {
      "host": "ec2-52-24-95-147.us-west-2.compute.amazonaws.com",
      "username": "ubuntu",
      //"password": "password"
      // or pem file (ssh based authentication)
      "pem": "C:/Users/username/meteor.pem"
    }
  ],
  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,
  // WARNING: If nodeVersion omitted will setup 0.10.36 by default. Do not use v, only version number.
  "nodeVersion": "0.10.35",
  // Install PhantomJS in the server
  "setupPhantom": false,
  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,
  // Application name (No spaces)
  "appName": "Homepage",
  // Location of app (local directory)
  "app": "C:/website",
  // Configure environment
  "env": {
    "ROOT_URL": "ec2-52-24-95-147.us-west-2.compute.amazonaws.com",
    "PORT": 80,
    "METEOR_ENV": "production"
  },
  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 15
}
Unfortunately this is not working and I am getting the following error:
throw er; // Unhandled 'error' event
^
Error: connect ETIMEDOUT {public IP from AWS}:22
at Object.exports._errnoException (util.js:837:11)
at exports._exceptionWithHostPort (util.js:860:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1060:14)
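Note that ETIMEDOUT on port 22 means the SSH connection itself never succeeded. A quick sanity check (reusing the host, user and pem file from the mup.json above) is to try to SSH into the instance directly; if this also hangs, the instance's security group is likely not allowing inbound connections on port 22 from your IP:

# manual SSH check with the same key and host as in mup.json
ssh -i C:/Users/username/meteor.pem ubuntu@ec2-52-24-95-147.us-west-2.compute.amazonaws.com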

Per the https://github.com/arunoda/meteor-up README, try setting the DEBUG environment variable:
Verbose Output
If you need to see the output of meteor-up (to see more precisely where it's failing or hanging, for example), run it like so:
DEBUG=* mup <command>
where <command> is one of the mup commands such as setup, deploy, etc.
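Since the paths in the mup.json suggest Windows, keep in mind that the DEBUG=* prefix syntax only works in a Unix-style shell. In cmd.exe the rough equivalent is to set the variable first and then run mup (a sketch, assuming setup is the failing command):

set DEBUG=*
mup setup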

Related

curl: (56) Recv failure: Connection reset by peer (AWS Lambda, TypeScript)

I have a small app (just testing it out for now) written in TypeScript that I would like to deploy to Lambda. I have followed the official AWS guide for creating Lambda container images. The only thing I changed is the location of the handler, which is src/index.ts.
When I run curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' I get curl: (56) Recv failure: Connection reset by peer.
Dockerfile
FROM public.ecr.aws/lambda/nodejs:14
COPY . ${LAMBDA_TASK_ROOT}
# Install NPM dependencies for function
RUN npm install
RUN npm run build
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "dist/index.handler" ]
package.json
{
  ...
  "scripts": {
    "build": "tsc",
    "start": "yarn run build && node dist/index.js",
    "lint": "eslint . --ext .ts",
    "test": "jest"
  },
  ...
  "dependencies": {
    ...
    "typescript": "^4.5.4"
  },
  "devDependencies": {
    "@types/node": "14",
    "jest": "^27.4.5"
  }
}
src/index.ts
export const handler = (event, context, callback) => {
  console.log("It ran");
  return callback(null, {
    statusCode: 200,
    message: "Hello",
    body: "Hello"
  })
}
tsconfig.json
{
  "compilerOptions": {
    "module": "commonjs",
    "esModuleInterop": true,
    "target": "es6",
    "moduleResolution": "node",
    "sourceMap": true,
    "outDir": "dist",
    "declaration": true
  },
  "lib": ["es2015"]
}
From my understanding, AWS Lambda container images use the aws-lambda-runtime-interface-emulator. Visiting its GitHub page, there is literally nothing regarding debugging: no way to get the logs, no way to understand what is running or not.
From this answer it looks like the app inside the container doesn't have a port assigned to it, but then again, there is no way to debug or see logs.
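(For what it's worth, the emulator does write to the container's stdout/stderr, so one way to see at least something is to run the container in the foreground, i.e. without -d, or to check its logs; the container name below is hypothetical:)

docker ps                          # find the running container's name or ID
docker logs -f my-lambda-container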
When I deploy the function to AWS Lambda and test it, it works as expected.
Question
How can I debug what is going on? Why am I receiving a curl: (56) Recv failure: Connection reset by peer?
The problem was that when running the image I mapped host port 9000 to container port 8000, while the application inside the container binds to 8080. The issue was solved by changing:
docker run -p 9000:8000 xxxx
to
docker run -p 9000:8080 xxxx
If you are still getting the error after following the accepted answer, make sure you don't have another container already using port 9000.
Just run docker container ls and check that the PORTS column does not show 9000 for another container.
To fix it, choose another port that you know is not in use, for example
docker run -p 9001:8080 xxxx, and update your curl request accordingly.
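For example, if you move to host port 9001 as above, the test request becomes (same payload as before):

curl -XPOST "http://localhost:9001/2015-03-31/functions/function/invocations" -d '{}'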

Connecting to Google Cloud PostgreSQL Server for local Strapi Application

I have a Google Cloud Postgres server set up, and in the production environment I am able to connect to it correctly. However, when I try to connect to the same cloud server from my local environment, it doesn't seem to work. Here's the configuration from my database.js:
module.exports = ({ env }) => {
  return {
    defaultConnection: "default",
    connections: {
      default: {
        connector: "bookshelf",
        settings: {
          client: "postgres",
          host: `/cloudsql/${env("INSTANCE_CONNECTION_NAME")}`,
          database: env("DATABASE_NAME"),
          username: env("DATABASE_USERNAME"),
          password: env("DATABASE_PASSWORD"),
        },
        options: {},
      },
    },
  };
};
I also have the app.yaml set up as normal. I have also created the .env file to store the relevant env information.
The error I am getting is
error Error: connect ENOENT /cloudsql/my-app-286221:us-central1:blm-resources/.s.PGSQL.5432
Does Strapi in local development support connecting to a cloud database, or am I doing something wrong here?
This shouldn't be a Strapi issue. First you need to grant outside access to the Google Cloud Postgres database. I'm not familiar with Google Cloud services, but from the documentation there seem to be a couple of things to do to grant access to the database.
More info in the documentation:
https://cloud.google.com/sql/docs/postgres/connect-external-app#appaccessIP
Basically you grant access for connections from outside and then you add that connection information to your Strapi config file.
I also noticed your host: is not pointing to an external hostname or IP but to a Google-internal socket path (/cloudsql/...), which is only available from inside Google's environment.
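One common way to handle this locally (covered in the linked documentation) is to run the Cloud SQL Auth Proxy on your machine and point the local Strapi config at 127.0.0.1 instead of the /cloudsql socket path. A rough sketch, using the instance connection name that appears in the error message and assuming the v1 cloud_sql_proxy binary:

# forward the Cloud SQL instance to a local TCP port
./cloud_sql_proxy -instances=my-app-286221:us-central1:blm-resources=tcp:5432
# then, in the local database.js, set host to 127.0.0.1 and port to 5432 instead of the /cloudsql/... path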

During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version

When I upload Dockerrun.aws.json to Elastic Beanstalk, I get the following errors in EB.
On events:
- During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
- Failed to deploy application.
- Unsuccessful command execution on instance id(s) 'i-XXXXXXXXXXXXXXX'. Aborting the operation.
On Health:
Overall:
- Command failed on all instances.
- Incorrect application version found on all instances. Expected version "Sample Application" (deployment 1).
i-XXXXXXXXXXXX:
- Application deployment failed at 2020-06-13T11:21:07Z with exit status 1 and error: Engine execution has encountered an error.
- Incorrect application version "43" (deployment 5). Expected version "Sample Application" (deployment 1).
On Logs:
Error response from daemon: You cannot remove a running container XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX. Stop the container before attempting removal or force remove
Error response from daemon: conflict: unable to delete XXXXXXXXXXXX (cannot be forced) - image is being used by running container XXXXXXXXXXXX
My Dockerrun.aws.json file:
{
"AWSEBDockerrunVersion": "1",
"Authentication": {
"Bucket": "try-new-new",
"Key": ".docker/config.json"
},
"Image": {
"Name": "registry.gitlab.com/XXXXXXXXX/XXXXXXXXXX",
"Update": "true"
},
"Ports": [
{
"ContainerPort": "80"
}
]
}
My .docker/config.json file:
{
  "auths": {
    "https://registry.gitlab.com/": {
      "auth": "XXXXXXXXXXXXXXXXX",
      "email": "abc@abc.com"
    }
  }
}
I noticed that the problem is an authentication problem: I was unable to authorize AWS to pull the image from registry.gitlab.com.
For now (though not a complete solution) I have worked around it by moving my image to the AWS Elastic Container Registry (ECR).
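For anyone taking the same route, the ECR workaround is roughly the following (repository name, account ID and region are placeholders):

aws ecr create-repository --repository-name my-app
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

With the image in ECR, the Image.Name in Dockerrun.aws.json points at the ECR URI, and the Authentication block should no longer be needed as long as the Beanstalk instance profile has ECR read permissions.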

x509: ECDSA verification failure

I have to install a .bna file on Fabric. I am following the link https://hyperledger.github.io/composer/tutorials/deploy-to-fabric-single-org.html. However, when I run the command composer runtime install -c PeerAdmin@fabric-network -n tutorial-network
I am getting the error:
Error: Error trying install composer runtime. Error: No valid
responses from any peers.
Response from attempted peer comms was an error:
Error: Failed to deserialize creator identity, err The supplied
identity is not valid, Verify() returned x509: certificate signed by
unknown authority (possibly because of "x509: ECDSA verification
failure" while trying to verify candidate authority certificate
"ca.org1.example.com").
Any help on this please?
Sounds like you have made an error in following the tutorial (which definitely works). Are you sure the MSP id (Org1MSP) for the peer has been set up correctly? Have you checked that the peer successfully joined the channel (when the Fabric was started)? Have you done a docker ps to see that your Fabric docker containers are running? Assuming you followed the steps correctly and are using an identity obtained from a fabric-ca server (per the tutorial), have you checked that the fabric-ca server is running correctly (docker logs)? Also, is it possible you restarted your Fabric docker environment at some point, so that your old key information is now invalid?
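A quick way to do those container checks (the CA container name is taken from the error message above):

docker ps --format "table {{.Names}}\t{{.Status}}"   # are all the Fabric containers up?
docker logs ca.org1.example.com                       # is the fabric-ca server healthy?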
If your Fabric environment was restarted and the old credentials are stale, check your connection profile (connection.json), e.g.:
{
  "name": "fabric-network",
  "type": "hlfv1",
  "mspID": "Org1MSP",
  "peers": [
    {
      "requestURL": "grpc://localhost:7051",
      "eventURL": "grpc://localhost:7053"
    }
  ],
  "ca": {
    "url": "http://localhost:7054",
    "name": "ca.org1.example.com"
  },
  "orderers": [
    {
      "url": "grpc://localhost:7050"
    }
  ],
  "channel": "composerchannel",
  "timeout": 300
}
then re-create the card with the correct key/signcert info:
composer card create -p connection.json -u PeerAdmin -c Admin@org1.example.com-cert.pem -k xxxxx_sk -r PeerAdmin -r ChannelAdmin
where the .pem file comes from the signcerts directory and xxxxx_sk is the generated key filename in the keystore.
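After that, the re-created card still has to be imported before it can be used; a minimal sketch, assuming the create command above produces a file named PeerAdmin@fabric-network.card:

composer card import -f PeerAdmin@fabric-network.card
composer card list   # the PeerAdmin@fabric-network card should now be listed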

Meteor UP: Verifying deployment failed

I'm getting the following error when trying to deploy my Meteor application on AWS.
I have selected Ubuntu as the operating system with 30 GB of disk. I have attached my mup.json file and a snapshot of the cmd output.
My mup.json is:
{
  // Server authentication info
  "servers": [
    {
      "host": "ec2-XXXXXXXXXXXXXXXXXXXXXXXXXXXXX.compute.amazonaws.com",
      "username": "ubuntu",
      //"password": "password"
      // or pem file (ssh based authentication)
      "pem": "D:/abcUbuntu.pem"
    }
  ],
  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,
  // WARNING: If nodeVersion omitted will setup 0.10.36 by default. Do not use v, only version number.
  "nodeVersion": "0.10.36",
  // Install PhantomJS in the server
  "setupPhantom": true,
  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,
  // Application name (No spaces)
  "appName": "Qxmedics",
  // Location of app (local directory)
  "app": "D:/abc/version1",
  // Configure environment
  "env": {
    "PORT": 80,
    "ROOT_URL": "ec2-XXXXXXXXXXXXXXXXXXXXX.compute.amazonaws.com",
    "MONGO_URL": "mongodb://localhost:27017/meteor"
  },
  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 15
}
I have tried a lot... please help me out with this, thanks.
For me the solution was increasing the wait time to 45. If the rest is correct, it should verify and deploy:
"deployCheckWaitTime": 45
It says there is a problem with your URL, so make sure it is a correct one with the http:// or https:// protocol in front (e.g. in ROOT_URL).
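If the verification still fails after that, the server-side logs usually show why the app did not come online; with this (legacy) arunoda/meteor-up CLI that would be something like:

mup logs -f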