I'm using docker-compose to serve Django at 0.0.0.0:80 and webpack-dev-server at 0.0.0.0:3000. They both work perfectly at 0.0.0.0.
I also have a domain bound to my external IP, and I can even access Django from that domain. But somehow I can't access webpack-dev-server, neither by external IP nor by domain.
Here is some additional data:
docker-compose.yml
web:
  build: backend/
  command: sh dockerfiles/init.sh
  ports:
    - "0.0.0.0:80:80"
js:
  build: webclient/
  command: yarn runserver
  ports:
    - "0.0.0.0:3000:3000"
As you can see, they are both served the same way here.
server.js
new WebpackDevServer(
  webpack(config),
  {
    publicPath: config.output.publicPath,
    hot: true,
    historyApiFallback: true,
    allowedHosts: [
      '0.0.0.0',
      'localhost',
      'DOMAIN',
      '*.DOMAIN'
    ],
    headers: {
      "Access-Control-Allow-Origin": "*"
    }
  }
).listen(3000, '0.0.0.0', function (err, result) {
  if (err) {
    console.log(err)
  }
  console.log('Listening at 0.0.0.0:3000')
})
When I check the port at 0.0.0.0:3000, it is open. When I check DOMAIN:3000, the port is closed.
Do you have any ideas what's going on?
You need to use disableHostCheck:
{
  publicPath: config.output.publicPath,
  hot: true,
  historyApiFallback: true,
  disableHostCheck: true, // <------ here is the missing piece
  allowedHosts: [
    '0.0.0.0',
    'localhost',
    'DOMAIN',
    '*.DOMAIN'
  ],
  headers: {
    "Access-Control-Allow-Origin": "*"
  }
}
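Note that in webpack-dev-server v4 and later, disableHostCheck was removed; the equivalent is allowedHosts: 'all'. A minimal sketch of the v4-style options, assuming the same setup as above:

```javascript
// v4-style devServer options: allowedHosts: 'all' replaces the removed
// disableHostCheck flag from v3 and disables the host check entirely.
const devServerOptions = {
  hot: true,
  historyApiFallback: true,
  allowedHosts: 'all',
  headers: { 'Access-Control-Allow-Origin': '*' },
};

console.log(devServerOptions.allowedHosts); // prints all
```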
I am trying to get cube.js production mode working in a Docker container, but I am getting:
Error: connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1159:16)
I have my redis:
docker run -e ALLOW_EMPTY_PASSWORD=yes --name my-redis -p 6379:6379 -d redis
I have my cubestore:
docker run -p 3030:3030 cubejs/cubestore
my cube.js:
module.exports = {
  jwt: {
    key: 'key',
  },
  contextToAppId: ({ securityContext }) =>
    `CUBEJS_APP_${securityContext.username}`,
  scheduledRefreshContexts: async () => [
    {
      securityContext: {
        username: 'public'
      },
    },
    {
      securityContext: {
        username: 'obondar',
      },
    },
  ],
};
.env
CUBEJS_DB_HOST=server
CUBEJS_DB_PORT=5432
CUBEJS_DB_NAME=db
CUBEJS_DB_USER=name
CUBEJS_DB_PASS=password
CUBEJS_DB_TYPE=postgres
CUBEJS_SCHEDULED_REFRESH_DEFAULT=true
CUBEJS_API_SECRET=key
CUBEJS_CUBESTORE_HOST=localhost
What am I missing?
Can someone help, please?
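(One likely culprit, assuming the Cube.js API itself also runs in a container: inside that container, 127.0.0.1 refers to the container, not the host, so a Redis published on the host's port 6379 is unreachable at localhost. A common fix is to run everything on one docker-compose network and address services by name; the sketch below is an assumption, not from the original post:)

```yaml
# Sketch (assumption): redis, cubestore, and the cube.js API on one
# docker-compose network, addressed by service name instead of 127.0.0.1.
version: "3"
services:
  redis:
    image: redis
  cubestore:
    image: cubejs/cubestore
  cube:
    image: cubejs/cube
    environment:
      - CUBEJS_REDIS_URL=redis://redis:6379   # service name, not 127.0.0.1
      - CUBEJS_CUBESTORE_HOST=cubestore       # service name, not localhost
```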
So I ran sudo docker-compose up with the following .yaml file:
version: "3"
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4563-4599:4563-4599"
      - "8080:8080"
    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock
      - SERVICES=s3,es,s3,ssm
      - DEFAULT_REGION=us-east-1
      - DATA_DIR=.localstack
      - AWS_ENDPOINT=http://localstack:4566
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp/localstack:/tmp/localstack
    networks:
      - my_localstack_network
networks:
  my_localstack_network:
Then I created an ES domain:
aws es create-elasticsearch-domain --domain-name MyEsDomain --endpoint-url=http://localhost:4566
and got the following output:
{
    "DomainStatus": {
        "DomainId": "000000000000/MyEsDomain",
        "DomainName": "MyEsDomain",
        "ARN": "arn:aws:es:us-east-1:000000000000:domain/MyEsDomain",
        "Created": true,
        "Deleted": false,
        "Endpoint": "MyEsDomain.us-east-1.es.localhost.localstack.cloud:4566",
        "Processing": true,
        "UpgradeProcessing": false,
        "ElasticsearchVersion": "7.10",
        "ElasticsearchClusterConfig": {
            "InstanceType": "m3.medium.elasticsearch",
            "InstanceCount": 1,
            "DedicatedMasterEnabled": true,
            "ZoneAwarenessEnabled": false,
            "DedicatedMasterType": "m3.medium.elasticsearch",
            "DedicatedMasterCount": 1,
            "WarmEnabled": false
        },
...
When I try to hit the ES server through port 4571, I get an "empty reply":
curl localhost:4571
curl: (52) Empty reply from server
I also tried hitting port 4566 and got back {"status": "running"}.
It looks like Elasticsearch never started on my machine.
localstack versions after 0.14.0 removed port 4571; see https://github.com/localstack/localstack/releases/tag/v0.14.0
Try using the localstack/localstack-full image.
localstack/localstack is the light version, which does not include Elasticsearch.
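Applied to the compose file above, that means swapping the image and, on newer versions, reaching Elasticsearch through the edge port 4566 rather than 4571; a sketch:

```yaml
# Sketch: the full image bundles Elasticsearch; on localstack >= 0.14.0
# all services are reached through the single edge port 4566.
services:
  localstack:
    image: localstack/localstack-full:latest
    ports:
      - "4566:4566"
```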
I am trying to run a Flask project which uses Grunt.
Gruntfile.js has the following configuration:
connect: {
  options: {
    port: 9000,
    // Change this to '0.0.0.0' to access the server from outside.
    //hostname: 'localhost',
    hostname: '0.0.0.0',
    livereload: 35728
  },
  proxies: [{
    context: '/api',
    host: 'backend',
    port: 5000,
    changeOrigin: true
  }],
app.py has the following:
app.run(host='127.0.0.1', port='9000', debug=True) #host='0.0.0.0'
ServerURL has the following configuration:
.constant('serverURL', 'http://127.0.0.1:9000/api');
The client shows this:
Started connect web server on http://0.0.0.0:9000
But in the client window I receive this:
Running "watch" task
Waiting...
>> Proxy error: ENOTFOUND
>> Proxy error: ENOTFOUND
Could anyone tell me what's causing this?
In the proxies, try changing the host from backend to 0.0.0.0.
connect: {
  options: {
    port: 9000,
    // Change this to '0.0.0.0' to access the server from outside.
    //hostname: 'localhost',
    hostname: '0.0.0.0',
    livereload: 35728
  },
  proxies: [{
    context: '/api',
    host: '0.0.0.0',
    port: 5000,
    changeOrigin: true
  }],
I am trying to connect to a local Redis database on an EC2 instance from a Lambda function. However, when I try to execute the code, I get the following error in the logs:
{
  "errorType": "Error",
  "errorMessage": "Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
  "code": "ECONNREFUSED",
  "stack": [
    "Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
    "    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)"
  ],
  "errno": "ECONNREFUSED",
  "syscall": "connect",
  "address": "127.0.0.1",
  "port": 6379
}
The security group has the following entries:
Type: Custom TCP Rule
Port: 6379
Source: <my security group name>
Type: Custom TCP Rule
Port: 6379
Source: 0.0.0.0/0
My Lambda function has the following code.
'use strict';
const Redis = require('redis');

module.exports.hello = async event => {
  var redis = Redis.createClient({
    port: 6379,
    host: '127.0.0.1',
    password: ''
  });
  redis.on('connect', function(){
    console.log("Redis client connected");
  });
  redis.set('age', 38, function(err, reply) {
    console.log(err);
    console.log(reply);
  });
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'The lambda function is called..!!',
        input: event,
        redis: redis.get('age')
      },
      null,
      2
    ),
  };
};
Please let me know where I am going wrong.
First thing: your Lambda is trying to connect to localhost, so this will not work. You have to use the public or private IP of the Redis instance.
But you still need to make sure of these things:
The Lambda should be in the same VPC as your EC2 instance
It should allow outbound traffic in its security group
Assign a subnet
Your instance's security group should allow the Lambda to connect to Redis
const redis = require('redis');
const redis_client = redis.createClient({
  host: 'your_instance_IP',
  port: 6379
});

exports.handler = (event, context, callback) => {
  redis_client.set("foo", "bar");
  redis_client.get("foo", function(err, reply) {
    redis_client.unref();
    callback(null, reply);
  });
};
You can also look into this how-should-i-connect-to-a-redis-instance-from-an-aws-lambda-function
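If you deploy with the Serverless Framework, the VPC attachment from the checklist above can be sketched like this (the IDs are placeholders for your own subnet and security group):

```yaml
# Sketch (placeholder IDs): attach the function to the EC2 instance's VPC
# so it can reach the instance's private IP.
functions:
  hello:
    handler: handler.hello
    vpc:
      securityGroupIds:
        - sg-xxxxxxxx
      subnetIds:
        - subnet-xxxxxxxx
```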
On Linux Ubuntu Server 20.04 LTS I was seeing a similar error after a reboot of the EC2 server, which for our use case runs an Express app via a cron job: a Node.js app (installed with nvm) using Passport.js for sessions in Redis:
Redis error: Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16) {
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 6379
}
What resolved it for me (my Node.js app was running as the ubuntu user, so I needed to make that path available) was to add to the PATH within /etc/crontab:
sudo nano /etc/crontab
Just comment out the original PATH in there so you can switch back if required (my original PATH was set to PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin) and append the location of the bin directory you need to refer to, in the format:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/home/ubuntu/.nvm/versions/node/v12.20.0/bin
And the error disappeared for me.
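(As a quick sanity check of the crontab line above; the nvm path is the example from my setup, so adjust the node version directory to yours:)

```shell
# Simulate the crontab PATH with the nvm-managed node bin dir appended
# (the version directory below is an example; adjust to your install).
PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/home/ubuntu/.nvm/versions/node/v12.20.0/bin"
# Print the last PATH entry to confirm the append took effect
echo "$PATH" | tr ':' '\n' | tail -n 1   # prints /home/ubuntu/.nvm/versions/node/v12.20.0/bin
```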
// redisInit.js
const session = require('express-session');
const redis = require('redis');
const RedisStore = require('connect-redis')(session);

const { redisSecretKey } = process.env;
const redisClient = redis.createClient();

redisClient.on('error', (err) => {
  console.log('Redis error: ', err);
});

const redisSession = session({
  secret: redisSecretKey,
  name: 'some_redis_store_name',
  resave: true,
  saveUninitialized: true,
  cookie: { secure: false },
  store: new RedisStore(
    {
      host: 'localhost', port: 6379, client: redisClient, ttl: 86400
    }
  )
});

module.exports = redisSession;
I am trying to deploy a meteor app to an AWS server, but am getting this message:
Started TaskList: Configuring App
[52.41.84.125] - Pushing the Startup Script
nodemiral:sess:52.41.84.125 copy file - src: /Users/Olivia/.nvm/versions/node/v7.8.0/lib/node_modules/mup/lib/modules/meteor/assets/templates/start.sh, dest: /opt/CanDu/config/start.sh, vars: {"appName":"CanDu","useLocalMongo":0,"port":80,"bind":"0.0.0.0","logConfig":{"opts":{"max-size":"100m","max-file":10}},"docker":{"image":"abernix/meteord:base","imageFrontendServer":"meteorhacks/mup-frontend-server","imagePort":80},"nginxClientUploadLimit":"10M"} +0ms
[52.41.84.125] x Pushing the Startup Script: FAILED Failure
Previously I was able to deploy using mup, but now I am getting this message. The only major thing I've changed is the Python path in my .noderc. I am also able to SSH into my Amazon server directly from the terminal. My mup file is:
module.exports = {
  servers: {
    one: {
      host: '##.##.##.###',
      username: 'ec2-user',
      pem: '/Users/Olivia/.ssh/oz-pair.pem'
      // password:
      // or leave blank to authenticate from ssh-agent
    }
  },
  meteor: {
    name: 'CanDu',
    path: '/Users/Olivia/repos/bene_candu_v2',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
      mobileSettings: {
        public: {
          "astronomer": {
            "appId": "<key>",
            "disableUserTracking": false,
            "disableRouteTracking": false,
            "disableMethodTracking": false
          },
          "googleMaps": "<key>",
          "facebook": {
            "permissions": ["email", "public_profile", "user_friends"]
          }
        },
      },
    },
    env: {
      ROOT_URL: 'http://ec2-##-##-##-###.us-west-2.compute.amazonaws.com',
      MONGO_URL: 'mongodb://. . .'
    },
    /*
    ssl: {
      crt: '/opt/keys/server.crt', // this is a bundle of certificates
      key: '/opt/keys/server.key', // this is the private key of the certificate
      port: 443, // 443 is the default value and it's the standard HTTPS port
      upload: false
    },
    */
    docker: {
      image: 'abernix/meteord:base'
    },
    deployCheckWaitTime: 60
  }
};
And I have checked to make sure there are no trailing commas, and have tried increasing the wait time, etc. The error message I'm getting is pretty unhelpful. Does anyone have any insight? Thank you so much!