Strapi admin not running in AWS

I just finished my Strapi project and deployed it to AWS. When I open my public IPv4 address on port 1337 it says 'server is running successfully', but when I try to log in to the admin panel it just keeps spinning and never shows the panel.
server.js
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  cron: { enabled: true },
  url: env('URL', 'http://localhost'),
  admin: {
    auth: {
      secret: env('ADMIN_JWT_SECRET', 'MY_JWT_SECRET'),
    },
  },
});
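One thing that is often checked in this situation (my own suggestion, not something confirmed in the question): the url option is still pointing at http://localhost rather than the address the browser actually uses to reach the instance. A minimal sketch of that change, with a placeholder address:

// config/server.js - sketch only; <your-public-ip> is a placeholder
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  cron: { enabled: true },
  // point URL at the address the browser actually uses to reach the instance
  url: env('URL', 'http://<your-public-ip>:1337'),
  admin: {
    auth: {
      secret: env('ADMIN_JWT_SECRET', 'MY_JWT_SECRET'),
    },
  },
});

After changing this, the admin panel usually has to be rebuilt (for example with npm run build) so the new URL is picked up; whether that resolves the spinner depends on what is actually blocking the admin requests.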

Related

Cube.js production mode error: ECONNREFUSED 127.0.0.1:6379

I am trying to get Cube.js production mode working in a Docker container, but I am getting:
Error: connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1159:16)
I have my redis:
docker run -e ALLOW_EMPTY_PASSWORD=yes --name my-redis -p 6379:6379 -d redis
I have my cubestore:
docker run -p 3030:3030 cubejs/cubestore
my cube.js:
module.exports = {
  jwt: {
    key: 'key',
  },
  contextToAppId: ({ securityContext }) =>
    `CUBEJS_APP_${securityContext.username}`,
  scheduledRefreshContexts: async () => [
    {
      securityContext: {
        username: 'public'
      },
    },
    {
      securityContext: {
        username: 'obondar',
      },
    },
  ],
};
.env
CUBEJS_DB_HOST=server
CUBEJS_DB_PORT=5432
CUBEJS_DB_NAME=db
CUBEJS_DB_USER=name
CUBEJS_DB_PASS=password
CUBEJS_DB_TYPE=postgres
CUBEJS_SCHEDULED_REFRESH_DEFAULT=true
CUBEJS_API_SECRET=key
CUBEJS_CUBESTORE_HOST=localhost
What am I missing?
Can someone help, please?
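One hedged observation, not taken from the thread: since Cube.js itself runs inside a Docker container here, 127.0.0.1 refers to that container rather than to the machine running the my-redis container, so the Redis location has to be given explicitly. Cube.js reads it from CUBEJS_REDIS_URL, so something along these lines would typically go in .env (the hostname is an assumption and depends on how the containers are networked, e.g. a shared Docker network or the host's address):

# assumption: the Redis container is reachable from the Cube.js container as my-redis
CUBEJS_REDIS_URL=redis://my-redis:6379

The same reasoning would apply to CUBEJS_CUBESTORE_HOST=localhost, which also points back at the Cube.js container itself.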

Connection refused when connecting to redis on EC2 instance

I am trying to connect to a local Redis database on an EC2 instance from a Lambda function. However, when I try to execute the code, I get the following error in the logs:
{
  "errorType": "Error",
  "errorMessage": "Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
  "code": "ECONNREFUSED",
  "stack": [
    "Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
    "    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)"
  ],
  "errno": "ECONNREFUSED",
  "syscall": "connect",
  "address": "127.0.0.1",
  "port": 6379
}
The security group has the following entries
Type: Custom TCP Rule
Port: 6379
Source: <my security group name>
Type: Custom TCP Rule
Port: 6379
Source: 0.0.0.0/0
My Lambda function has the following code.
'use strict';

const Redis = require('redis');

module.exports.hello = async event => {
  var redis = Redis.createClient({
    port: 6379,
    host: '127.0.0.1',
    password: ''
  });

  redis.on('connect', function() {
    console.log("Redis client connected");
  });

  redis.set('age', 38, function(err, reply) {
    console.log(err);
    console.log(reply);
  });

  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'The lambda function is called..!!',
        input: event,
        redis: redis.get('age')
      },
      null,
      2
    ),
  };
};
Please let me know where I am going wrong.
First thing: your Lambda is trying to connect to localhost, so this will not work. You have to use the public or private IP of the instance running Redis.
You also need to make sure of the following:
The Lambda should be in the same VPC as your EC2 instance.
The Lambda's security group should allow outbound traffic.
The Lambda should have a subnet assigned.
The EC2 instance's security group should allow the Lambda to connect to Redis.
const redis = require('redis');

const redis_client = redis.createClient({
  host: 'your_instance_IP',
  port: 6379
});

exports.handler = (event, context, callback) => {
  redis_client.set("foo", "bar");
  redis_client.get("foo", function(err, reply) {
    redis_client.unref();
    callback(null, reply);
  });
};
You can also look into this: how-should-i-connect-to-a-redis-instance-from-an-aws-lambda-function
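As a side note of my own, not part of the original answer, the instance address can also be read from an environment variable on the Lambda instead of being hardcoded; REDIS_HOST below is a hypothetical variable name:

const redis = require('redis');

// sketch: REDIS_HOST is a hypothetical environment variable configured on the Lambda
const redis_client = redis.createClient({
  host: process.env.REDIS_HOST,
  port: 6379
});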
On Ubuntu Server 20.04 LTS I was seeing a similar error after a reboot of the EC2 server. For our use case the server runs an Express app (a Node.js app installed with nvm, started via a cron job) that uses Passport.js with sessions stored in Redis:
Redis error: Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16) {
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 6379
}
What resolved it for me was adding the location of my Node binaries to the PATH in /etc/crontab, since the Node.js app runs as the ubuntu user and needs that path available:
sudo nano /etc/crontab
Comment out the original PATH in there so you can switch back if required (my original PATH was set to PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin) and append the location of the bin directory you need to refer to, in the format:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/home/ubuntu/.nvm/versions/node/v12.20.0/bin
And the error disappeared for me.
// redisInit.js
const session = require('express-session');
const redis = require('redis');
const RedisStore = require('connect-redis')(session);

const { redisSecretKey } = process.env;

const redisClient = redis.createClient();

redisClient.on('error', (err) => {
  console.log('Redis error: ', err);
});

const redisSession = session({
  secret: redisSecretKey,
  name: 'some_redis_store_name',
  resave: true,
  saveUninitialized: true,
  cookie: { secure: false },
  store: new RedisStore({
    host: 'localhost',
    port: 6379,
    client: redisClient,
    ttl: 86400
  })
});

module.exports = redisSession;
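For context, and as my own illustration rather than part of the original answer, a module like redisInit.js above is typically wired into an Express app roughly like this:

// app.js - sketch assuming the redisInit.js module shown above
const express = require('express');
const redisSession = require('./redisInit');

const app = express();
app.use(redisSession); // every request now gets a session backed by Redis

app.listen(3000);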

Meteor deploy error (mup): pushing meteor app bundle to server failed

I am trying to deploy a meteor app to an AWS server, but am getting this message:
Started TaskList: Configuring App
[52.41.84.125] - Pushing the Startup Script
  nodemiral:sess:52.41.84.125 copy file - src: /Users/Olivia/.nvm/versions/node/v7.8.0/lib/node_modules/mup/lib/modules/meteor/assets/templates/start.sh, dest: /opt/CanDu/config/start.sh, vars: {"appName":"CanDu","useLocalMongo":0,"port":80,"bind":"0.0.0.0","logConfig":{"opts":{"max-size":"100m","max-file":10}},"docker":{"image":"abernix/meteord:base","imageFrontendServer":"meteorhacks/mup-frontend-server","imagePort":80},"nginxClientUploadLimit":"10M"} +0ms
[52.41.84.125] x Pushing the Startup Script: FAILED Failure
Previously I had been able to deploy using mup, but now I am getting this message. The only major thing I've changed is the Python path in my .noderc. I am also able to SSH into my Amazon server directly from the terminal. My mup file is:
module.exports = {
  servers: {
    one: {
      host: '##.##.##.###',
      username: 'ec2-user',
      pem: '/Users/Olivia/.ssh/oz-pair.pem'
      // password:
      // or leave blank to authenticate from ssh-agent
    }
  },

  meteor: {
    name: 'CanDu',
    path: '/Users/Olivia/repos/bene_candu_v2',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
      mobileSettings: {
        public: {
          "astronomer": {
            "appId": "<key>",
            "disableUserTracking": false,
            "disableRouteTracking": false,
            "disableMethodTracking": false
          },
          "googleMaps": "<key>",
          "facebook": {
            "permissions": ["email", "public_profile", "user_friends"]
          }
        },
      },
    },
    env: {
      ROOT_URL: 'http://ec2-##-##-##-###.us-west-2.compute.amazonaws.com',
      MONGO_URL: 'mongodb://. . .'
    },
    /*ssl: {
      crt: '/opt/keys/server.crt', // this is a bundle of certificates
      key: '/opt/keys/server.key', // this is the private key of the certificate
      port: 443, // 443 is the default value and it's the standard HTTPS port
      upload: false
    },*/
    docker: {
      image: 'abernix/meteord:base'
    },
    deployCheckWaitTime: 60
  }
};
And I have checked to make sure there are no trailing commas, and have tried increasing the wait time, etc. The error message I'm getting is pretty unhelpful. Does anyone have any insight? Thank you so much!

Cannot deploy ember app in Firebase

I am unable to deploy my Ember application to Firebase. I can only see the welcome page of Firebase Hosting:
You're seeing this because you've successfully setup Firebase Hosting. Now it's time to go build something extraordinary!
I have installed the EmberFire add-on, as well as the Firebase tool.
My config file looks like this:
module.exports = function(environment) {
  var ENV = {
    modulePrefix: 'sample',
    environment: environment,
    rootURL: '/',
    locationType: 'auto',
    firebase: {
      apiKey: 'xxxxxx',
      authDomain: 'xxxxx',
      databaseURL: 'xxxx',
      storageBucket: 'xxxxx',
      messagingSenderId: 'xxxxx'
    },
    EmberENV: {
      FEATURES: {
        // Here you can enable experimental features on an ember canary build
        // e.g. 'with-controller': true
      }
    },
    APP: {
      // Here you can pass flags/options to your application instance
      // when it is created
    }
  };

  if (environment === 'development') {
    // ENV.APP.LOG_RESOLVER = true;
    ENV.APP.LOG_ACTIVE_GENERATION = true;
    ENV.APP.LOG_TRANSITIONS = true;
    ENV.APP.LOG_TRANSITIONS_INTERNAL = true;
    ENV.APP.LOG_VIEW_LOOKUPS = true;
  }

  return ENV;
};
firebase.json:
{
  "database": {
    "rules": "database.rules.json"
  },
  "hosting": {
    "public": "dist",
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  }
}
I have built the app and deployed it using the following commands:
ember build --prod
firebase login
firebase init
firebase deploy
Thanks in advance :-)
When you initialise your Ember.js app with the firebase init command for the first time, you will be prompted:
? File dist/index.html already exists. Overwrite? (y/N)
Respond with No. Responding with Yes lets the default Firebase Hosting welcome page overwrite your Ember app's index.html file, which is why you are still greeted with the Firebase Hosting welcome page.
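If the welcome page has already overwritten the file, one way to recover (my own suggestion, assuming dist is the hosting public directory as in the firebase.json above) is to rebuild the app so dist/index.html is your Ember build again, then redeploy only the hosting part:
ember build --environment=production
firebase deploy --only hosting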

Deploying EmberJS to AWS using SSH + RSync

I've managed to deploy a simple todo app to AWS with S3 using this guide:
http://emberigniter.com/deploy-ember-cli-app-amazon-s3-linux-ssh-rsync/
However, when I attempt the "Deploying with SSH and Rsync" approach from the same tutorial, I run into the following error:
gzipping **/*.{js,css,json,ico,map,xml,txt,svg,eot,ttf,woff,woff2}
ignoring null
✔ assets/ember-user-app-d41d8cd98f00b204e9800998ecf8427e.css
✔ assets/vendor-d41d8cd98f00b204e9800998ecf8427e.css
✔ assets/ember-user-app-45a9825ab0116a8007bb48645b09f060.js
✔ crossdomain.xml
✔ robots.txt
✔ assets/vendor-d008595752c8e859a04200ceb9a77874.js
gzipped 6 files ok
|
+- upload
| |
| +- rsync
- Uploading using rsync...
- Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-47/rsync/io.c(453) [sender=2.6.9]
The following is my config/deploy.js
module.exports = function(deployTarget) {
  var ENV = {
    build: {
      environment: deployTarget
    },
    's3-index': {
      accessKeyId: "<myKeyID>",
      secretAccessKey: "<mySecret>",
      bucket: "emberjsft",
      region: "ap-southeast-1",
      allowOverwrite: true
    },
    's3': {
      accessKeyId: "<myKeyID>",
      secretAccessKey: "<mySecret>",
      bucket: "emberjsft",
      region: "ap-southeast-1"
    },
    'ssh-index': {
      remoteDir: "/var/www/",
      username: "ec2-user",
      host: "ec2-<elastic-ip>.ap-southeast-1.compute.amazonaws.com",
      privateKeyFile: "/Users/imac/MY_AWS_PEMFILE.pem",
      allowOverwrite: true
    },
    rsync: {
      dest: "/var/www/",
      username: "ec2-user",
      host: "ec2-<elastic-ip>.ap-southeast-1.compute.amazonaws.com",
      delete: false
    }
    // include other plugin configuration that applies to all deploy targets here
  };

  if (deployTarget === 'development') {
    ENV.build.environment = 'development';
    // configure other plugins for development deploy target here
  }

  if (deployTarget === 'staging') {
    ENV.build.environment = 'production';
    // configure other plugins for staging deploy target here
  }

  if (deployTarget === 'production') {
    ENV.build.environment = 'production';
    // configure other plugins for production deploy target here
  }

  // Note: if you need to build some configuration asynchronously, you can return
  // a promise that resolves with the ENV object instead of returning the
  // ENV object synchronously.
  return ENV;
};
How should I resolve this issue?
Thanks
I've just spent the last hour fighting the same issue as you. I was able to more or less fix it by using ssh-add /home/user/.ssh/example-key.pem and removing privateKeyFile.
I still get an error thrown after the transfer ends, but I can confirm all files successfully transferred to my EC2 box despite the error.
deploy.js
module.exports = function (deployTarget) {
  var ENV = {
    build: {
      environment: deployTarget
    },
    'ssh-index': {
      remoteDir: "/var/www/",
      username: "ubuntu",
      host: "52.xx.xx.xx",
      allowOverwrite: true
    },
    rsync: {
      host: "ubuntu@52.xx.xx.xx",
      dest: "/var/www/",
      recursive: true,
      delete: true
    }
  };

  return ENV;
};
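For reference, and as my own addition, relying on ssh-agent as described above means loading the key before running the deploy; the key path is the one from this answer and production is a placeholder deploy target:
ssh-add /home/user/.ssh/example-key.pem
ember deploy production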
In your deploy.js file you need to put your own information in place of accessKeyId; you left a placeholder there. The same goes for secretAccessKey, and for host you need to put your Elastic IP address.
myKeyID and mySecret should live in a .env file and then be accessed here as process.env.myKeyID and process.env.mySecret.
It is not good practice to hard-code the keys in the deploy.js file.
A best practice would be to read them using Consul.
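A minimal sketch of the .env suggestion (my own illustration; it assumes the dotenv package is installed and a .env file sits in the project root):

// config/deploy.js - sketch of reading the credentials from the environment
require('dotenv').config(); // loads .env into process.env

module.exports = function (deployTarget) {
  var ENV = {
    's3-index': {
      accessKeyId: process.env.myKeyID,
      secretAccessKey: process.env.mySecret,
      bucket: "emberjsft",
      region: "ap-southeast-1",
      allowOverwrite: true
    }
    // ...the rest of the plugin configuration as before
  };
  return ENV;
};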