PredictionIO unable to get engine running on AWS - amazon-web-services

I'm trying to deploy my classification engine on AWS, following the tutorial.
On localhost you deploy the Event Server on port 7070 and then an engine on port 8000, but on AWS the Event Server is running and "pio deploy" tries to deploy the engine on 0.0.0.0:8000, and if I try to send a query to my DNS I get:
curl -H "Content-Type: application/json" -d '{ "attr0":2, "attr1":0, "attr2":0 }' http://MYDNS:8000/queries.json
curl: (7) Failed to connect to MYDNS port 8000: Connection refused
What is the correct way to deploy the engine and send queries to it on AWS?
Thanks for any help :)

Looks like your question is answered on Google Groups: https://groups.google.com/forum/#!topic/predictionio-user/13dveknEVJw
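For reference, the usual shape of the fix on EC2 is to open the engine's port in the instance's security group, bind the engine to all interfaces, and then query the public DNS. A minimal sketch, assuming your PredictionIO version supports the --ip and --port options of pio deploy and that port 8000 is the one you opened:
pio deploy --ip 0.0.0.0 --port 8000
curl -H "Content-Type: application/json" -d '{ "attr0":2, "attr1":0, "attr2":0 }' http://MYDNS:8000/queries.json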

Related

Solr 9 UI not loading but working with CLI

Recently updated Solr from 8.4.1 to 9.0.0 on EC2 (Amazon Linux 2 AMI).
I get the expected result when I use the CLI against the localhost domain: curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET "http://localhost:8983/solr/core_name/select?q=(fieldname%3A\fieldvalue%20)&start=0".
But when I try to use the EC2 instance's Elastic IP, the connection is refused. It was working well with the previous version. I configured it locally before trying it on EC2; it worked fine locally but not on EC2. Not sure what's missing.
Screenshot of browser response
Solr status screenshot
Uncommented SOLR_JETTY_HOST in the solr.in.sh file and changed it from 127.0.0.1 to 0.0.0.0. Deleted everything and reindexed the core after updating the Jetty host.
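For anyone landing here, the change boils down to one line in solr.in.sh plus a restart. A sketch, assuming a service-style install where the file lives at /etc/default/solr.in.sh (the path and restart command depend on how Solr was installed), and that the EC2 security group already allows inbound TCP on 8983:
# in /etc/default/solr.in.sh: bind Jetty to all interfaces instead of the Solr 9 default of 127.0.0.1
SOLR_JETTY_HOST="0.0.0.0"
# then restart Solr (command varies by install method)
sudo service solr restart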

GCE SQL proxy connecting to wrong sql ip

I am having a strange issue with my GCE Proxy.
I used to have a Docker image with an application that would use the GCE proxy to connect to the MySQL database (second generation). Everything worked fine, but I had to stop the services for about a month.
Now I need them back up, and for some reason I am not able to connect to the database (the configuration basically did not change, and I am using the same Docker image with the code).
On closer inspection I see in logs:
Caused by: java.sql.SQLException: Access denied for user 'my-usr'@'cloudsqlproxy~SOME_IP' (using password: YES)
The problem is that "SOME_IP" is not actually the SQL instance's IP, and I have no idea where that IP is coming from.
Does anyone have an idea on how to fix this issue?
I did try to:
- recreate the database user
- recreate the service account
Any advice is welcome.
You can use the Cloud SQL Proxy to connect to your MySQL instance; see the steps below:
Download the proxy:
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
Make the proxy executable:
chmod +x cloud_sql_proxy
Use the proxy to connect to multiple instances:
./cloud_sql_proxy -instances=yourProject:us-central1:myInstance=tcp:3306,yourProject:us-central1:myInstance2=tcp:3307 &
mysql -u myUser --host 127.0.0.1 --port 3307
Try to connect to your database:
mysql -h127.0.0.1 -u$YOUR_CLOUD_SQL_USER -p$YOUR_CLOUD_SQL_PASSWORD
Hope it helps!
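On the Access denied error itself: the host cloudsqlproxy~SOME_IP in the message is how Cloud SQL reports a client coming in through the proxy (the IP is the connecting machine's address as seen by Cloud SQL, not the instance's), so the MySQL user has to be allowed from that host. A hedged sketch, assuming the v1 proxy; the key path, database name, and password are placeholders, and the '%' wildcard host is one way to cover proxy connections:
./cloud_sql_proxy -instances=yourProject:us-central1:myInstance=tcp:3306 -credential_file=/path/to/service-account-key.json &
# create or repair the user with a host pattern that matches proxy connections, then grant access
mysql -h 127.0.0.1 -u root -p -e "CREATE USER 'my-usr'@'%' IDENTIFIED BY 'new-password'; GRANT ALL PRIVILEGES ON mydb.* TO 'my-usr'@'%'; FLUSH PRIVILEGES;"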

Use Storybook server on AWS C9?

I'm trying to run a storybook server on AWS Cloud9 but the URL it gives doesn't load anything.
I'm starting the server with
start-storybook -h $HOST -p $PORT --ci
This runs through without error and gives me a "server started" message with a URL. But that URL doesn't connect to anything.
I do notice that the URL isn't secure, and I can imagine AWS having an issue with that. There is an --https option on the start-storybook command, but it requires SSL information that I don't know how to source.
Anyone know how I can get this working?
C9 only opens ports 8080, 8081, and 8082, so your server should be listening on one of those three. Try:
start-storybook -p 8080 -s public
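Combining that with the host flag from the question so Storybook binds to all interfaces (the -s public static directory is an assumption about the project layout), something like:
start-storybook -p 8080 -h 0.0.0.0 -s public --ci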

cf create-service-broker fails with connection refused

I'm experimenting with CF in my local bosh-lite setup.
The apps that I deploy into it work well. I am now trying to follow the steps here
https://github.com/cf-platform-eng/cf-community-workshop/blob/master/demos/service-broker-lab.adoc
to try out the custom service broker setup.
The https://github.com/mstine/haash-broker application starts and is running fine:
$ cf apps
name requested state instances memory disk urls
haash-broker started 1/1 768M 1G haash-broker.vbox.mojito, haash-broker.192.168.50.6.xip.io
I can access it fine from my host machine's browser:
http://haash-broker.192.168.50.6.xip.io/v2/catalog
But when I execute the
cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.6.xip.io
I get
$ cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.6.xip.io
Creating service broker haash-broker as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker could not be reached: http://haash-broker.192.168.50.6.xip.io/v2/catalog
When I log in to the CC VM:
$ bosh -e vbox -d cf ssh api/eb4cec99-bab1-4513-a980-fb92775ac2d8
I can ping the hostname:
api/eb4cec99-bab1-4513-a980-fb92775ac2d8:~$ sudo ping haash-broker.192.168.50.6.xip.io
PING haash-broker.192.168.50.6.xip.io (192.168.50.6) 56(84) bytes of data.
64 bytes from 192.168.50.6: icmp_seq=1 ttl=64 time=0.080 ms
But the wget connection gets refused:
api/eb4cec99-bab1-4513-a980-fb92775ac2d8:~$ wget http://warreng:natedogg@haash-broker.192.168.50.6.xip.io/v2/catalog
--2018-04-06 04:19:05-- http://warreng:*password*@haash-broker.192.168.50.6.xip.io/v2/catalog
Resolving haash-broker.192.168.50.6.xip.io (haash-broker.192.168.50.6.xip.io)... 192.168.50.6
Connecting to haash-broker.192.168.50.6.xip.io (haash-broker.192.168.50.6.xip.io)|192.168.50.6|:80... failed: Connection refused.
The firewall permits everything on that VM (sudo iptables -L).
The hostname gets resolved properly, the ping works, and port 80 is open on the target IP, since I can reach it from my host browser.
How can it be that wget doesn't work in this situation?
This also seems to be the reason why cf create-service-broker fails for me.
UPDATE
I've managed to execute the cf create-service-broker command with the URL of an nginx reverse proxy running outside of my bosh-lite environment. The proxy redirects to the same initial URL http://haash-broker.192.168.50.6.xip.io
and the command succeeds this way.
But the subsequent
cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.1.xip.io:9999
cf enable-service-access haash
cf create-service HaaSh basic my-hash
(where haash-broker.192.168.50.1.xip.io:9999 is my nginx proxy) fails with
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://haash-broker.192.168.50.1.xip.io:9999/v2/service_instances/4ef19154-d238-4cb3-8003-803fba53af3f?accepts_incomplete=true. Status Code: 400 Bad Request, Body: {"timestamp":1523008856993,"error":"Bad Request","status":400,"message":""}
I can see in both the nginx and broker app logs that the request reaches the broker and it answers with 400.
Now debugging why.
Can you post the result of the --server-response option used with wget? Also, what happens when you try to curl the broker?
The broker requires credentials, but it would be interesting to see whether it responds with 401 or 500 on the first request that wget makes without credentials.
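Spelled out, with the credentials and URL from the question, those two checks would be:
wget --server-response -O - http://warreng:natedogg@haash-broker.192.168.50.6.xip.io/v2/catalog
curl -v -u warreng:natedogg http://haash-broker.192.168.50.6.xip.io/v2/catalog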

Cloudfoundry's cf login yields a "connection refused" error message

I am experiencing issues with the latest bosh-lite VirtualBox machine. See here.
I have just downloaded the Vagrantfile and done a
vagrant up
Then a:
cf login -u admin -a 192.168.50.4 -p admin
But it gives me a:
API endpoint: 192.168.50.4
FAILED
connection refused
Can anyone please help?
Get the address of the haproxy by logging in to it (bosh ssh, then ifconfig). Use the haproxy address as the API endpoint.
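Spelled out as commands (the instance name haproxy/0 and the HAPROXY_IP placeholder are assumptions, not from the thread; use whatever instance name bosh lists for the haproxy and whatever address ifconfig reports):
bosh ssh haproxy/0
ifconfig
cf login -u admin -p admin -a https://api.HAPROXY_IP.xip.io --skip-ssl-validation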