I set up a Neo4j database on EC2 and am not sure how to access it with my REST client. First, how do I change the username and password once it's running on EC2? Also, what do I change my localhost to so I can access the server?
This is the example statement I want to know how to configure:
from neo4jrestclient import client
db = client.GraphDatabase("http://localhost:7474", username="neo4j", password="neo4j")
The first thing you have to do on a new Neo4j instance is change the password. To do so, use your browser to connect to the db at http://<your-ec2-public-ip>:7474. Use the default login neo4j/neo4j and change your password.
Your client app then of course needs to supply the changed password.
Alternatively you can change the password via the REST API as well, see http://neo4j.com/docs/stable/rest-api-security.html#rest-api-security-user-status-and-password-changing.
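Putting this together, here is a minimal sketch of the client call against EC2 (the hostname and new password are placeholders; it also assumes port 7474 is open in the instance's security group):

from neo4jrestclient import client

# Replace localhost with the EC2 instance's public DNS name or IP
# (placeholder shown) and supply the password you set on first login.
db = client.GraphDatabase("http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7474",
                          username="neo4j", password="your-new-password")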
I'm trying to get my Flask App Engine project to read data stored in a Cloud SQL MySQL 5.7 database. Something has gone wrong, as all I've gotten is pymysql.err.OperationalError. I've been following the instructions here: https://cloud.google.com/sql/docs/mysql/connect-app-engine.
The Cloud SQL Admin API is enabled.
According to the linked document:
App Engine uses a service account to authorize your connections to Cloud SQL. This service account must have the correct IAM permissions to successfully connect. Unless otherwise configured, the default service account is in the format service-PROJECT_NUMBER@gae-api-prod.google.com.iam.gserviceaccount.com.
The IAM page listing the permissions for my project doesn't contain a member in the above format. The "App Engine default service account" is of the format: my-project-name@appspot.gserviceaccount.com. This service account has Cloud SQL Client and Editor roles.
My queries are unsuccessful, and after each attempt I see the following in the Logs Viewer:
7183 [Note] Access denied for user 'www-data'@'cloudsqlproxy~xxx.xxx.xx.xx' (using password: YES)
(IP address redacted). This is somewhat confusing as 'www-data' isn't a user I specified in my code.
The code used to connect:
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://' + ':' + db_user + '@/' + db_name + '?unix_socket=/cloudsql/' + connection_name
Where have I gone wrong and how might I fix it?
This is a MySQL error raised when trying to connect to a database with the wrong credentials.
Please verify that the values you are using are the right ones.
If you don't remember the database username and password, you can change them in the console by following the steps below, which are also explained here:
Go to the Cloud SQL console
Select your database
Go to Users
Next to the user, click the three dots
Select Change password
Type the new password and click OK
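Once the credentials are reset, make sure the connection URI actually carries both the username and the password. A minimal sketch of the URI format for a unix-socket Cloud SQL connection (all values are placeholders you supply):

from flask import Flask

app = Flask(__name__)

# Placeholders: substitute your own credentials and instance connection name.
db_user = 'myuser'
db_pass = 'mypassword'
db_name = 'mydb'
connection_name = 'my-project:my-region:my-instance'

app.config['SQLALCHEMY_DATABASE_URI'] = (
    'mysql+pymysql://' + db_user + ':' + db_pass + '@/' + db_name
    + '?unix_socket=/cloudsql/' + connection_name
)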
Most likely, you're building and testing your app locally, where you're supplying credentials for a user that has access to Cloud SQL. Upon deployment, unless you specify otherwise, a default service account will be assigned to the App Engine instance.
To fix this:
Head over to IAM & admin
Search for the app engine account --> ending with gae-api-prod.google.com.iam.gserviceaccount.com
Edit
Assign it the Cloud SQL Client role
Let me know if this solves it for you!
This is the first time I am doing a deployment and I am new to AWS. I have a project deployed on an Amazon AWS server for testing purposes. I have set the domain name as https://domain.biz.
I have a login page where, once the user logs in successfully, I set the userId in the session and navigate the user to the dashboard.
On the dashboard I have added a function to check whether the session contains a userId value. If the session is not set, I navigate the user back to the login page. This is to block unauthorized access to the site.
The problem I am facing: before adding SSL, the server worked fine, and http://domain.biz still works fine. After moving the domain to HTTPS, the session only works on one page and is then lost. What is the problem here with AWS? Am I missing anything?
Check this: Session lost when switching from HTTP to HTTPS in PHP
Since CodeIgniter is in PHP, I suppose this thread could solve your issue.
I set up Dataproc using the steps in the link here:
https://cloud.google.com/dataproc/docs/tutorials/jupyter-notebook
But my Jupyter keeps asking for a password.
I didn't set any password.
I tried my Google account password; that doesn't work.
I ran the following from /root:
sudo grep -ir password
and got the output below, which confirmed no password is set:
.jupyter/jupyter_notebook_config.py:## Hashed password to use for web authentication.
.jupyter/jupyter_notebook_config.py:# The string should be of the form type:salt:hashed-password.
.jupyter/jupyter_notebook_config.py:#c.NotebookApp.password = u''
.jupyter/jupyter_notebook_config.py:# Only used when no password is enabled.
.local/share/jupyter/runtime/nbserver-3668.json: "password": false,
Since the initialization action just installs the latest version using conda install jupyter, this appears to have been caused by a recent upstream change: upgrading the notebook component from 4.2.3 to 4.3.0 turned token-based auth on by default. A cluster I deployed a couple of weeks ago using the out-of-the-box init action didn't have the login page you're seeing. The design of the init action is to let Google Compute Engine firewalls be your layer of defense and the SSH tunnel your secure connection, rather than relying on the various third-party implementations of auth in the different Hadoop/Spark tools and web UIs.
The solution will be to add a line to setup-jupyter-kernel.sh:
echo "c.NotebookApp.token = u''" >> ~/.jupyter/jupyter_notebook_config.py
to disable Jupyter-side authentication altogether and revert to the behavior of a couple of weeks ago. Note that if you want to do this yourself, you'll have to fiddle with the INIT_ACTIONS_REPO and INIT_ACTIONS_BRANCH settings in jupyter.sh, which may take some getting used to if you haven't been customizing them already. We'll try to push a fix as soon as possible, and once that's done you should be able to use the out-of-the-box init action without hitting the login screen again.
If you already have a cluster running, you can disable the auth for your Jupyter server by applying the same change manually as root after SSH'ing into the master:
sudo su
killall -9 jupyter-notebook
echo "c.NotebookApp.token = u''" >> ~/.jupyter/jupyter_notebook_config.py
/dataproc-initialization-actions/jupyter/internal/launch-jupyter-kernel.sh
Alternatively, if you do want to keep the new default token-authorization approach, the Jupyter server logs a generated token to /var/log/jupyter_notebook.log; look for a line stating The Jupyter Notebook is running at: http://[all ip addresses on your system]:8123/?token=[some-token-string-here]. That token string can be plugged into the password field or passed as the URL parameter, as shown.
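If you'd rather script that lookup, a minimal sketch that pulls the token out of the log file named above (the token=... format comes from the log line just quoted):

import re

# Scan the Jupyter log for the first ?token=... occurrence and print it.
with open("/var/log/jupyter_notebook.log") as log:
    for line in log:
        match = re.search(r"token=(\w+)", line)
        if match:
            print(match.group(1))
            break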
EDIT: The fix has now been committed into Dataproc's init action repository and synced to gs://dataproc-initialization-actions. Deployments out-of-the-box once again work without an extra login page in the Jupyter UI.
A new metadata option has also been added, with key JUPYTER_AUTH_TOKEN, if you do want to specify a token (which Jupyter also allows to be used in the password field). Use it as follows only if you want a login page requesting your specified token; no metadata keys are necessary if you just want the old behavior of no login page:
gcloud dataproc clusters create <cluster-name> \
--initialization-actions gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--metadata JUPYTER_AUTH_TOKEN=foobarbaz
Then your login password will be foobarbaz.
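As a quick sanity check that the token is accepted, a sketch using requests (it assumes an SSH tunnel is already forwarding the master's port 8123, the notebook port from the log line above, to localhost):

import requests

# Hypothetical tunnel endpoint; the token matches the metadata value above.
resp = requests.get("http://localhost:8123/", params={"token": "foobarbaz"})
print(resp.status_code)  # expect 200 once the token is accepted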
When you don't set any password, you can log in with the credentials of the server it is installed on.
I am using Spring Boot and created a web application that uses a MongoDB database. Locally I use the command prompt ("mongod" and "mongo"), where I can query the data that I have entered in the UI.
My current application uses MongoDB running on localhost with the default port 27017, and the web application reflects the data stored in the database. When I push this application to Cloud Foundry and bind the MongoDB service, it uses another database. Where and how can I view/access all the data being entered? On the local machine I am able to use db.collection.find() and it queries all of my data.
Problem
Once I push my application to Cloud Foundry, none of the data that was stored locally is linked with it. I am able to store values into the PCF MongoDB, but I do not know how to view the data I have in it. Is there a command or a method to view all the data that I have entered into the PCF MongoDB?
Attempt
Looking at my VCAP_SERVICES I was able to see my database name, username, and password, but they look like they are encrypted: letters, numbers, and hyphens all mixed together. Below is an example of what VCAP_SERVICES looks like (dummy values, same format):
"database": "9faf201a-39b1-4lse-49242f404g11"
"host": "10.100.100.333"
"password": "2jnkj4nk22kk5lk6kj4n4k6nkj6001"
"username": "401849301k-8g3f-5c3j-k28-583920308592f04"
I tried the command below in a CLI:
mongo someurl.mongodomain.com:45475/database_name -u username -p password
So for the database name, username, and password I simply copied and pasted the encrypted-looking values:
mongo myurl.com:1337/9faf201a-39b1-4lse-49242f404g11 -u 401849301k-8g3f-5c3j-k28-583920308592f04 -p 2jnkj4nk22kk5lk6kj4n4k6nkj6001
and I get a connection failed. Maybe I have to input the correct username and password, but where can I set them? I am using Spring Boot and this was handled automatically for me, so I never created a username or password.
Here are five suggestions (thanks to Daniel.Mikusa for the last one):
Push a web client for mongodb and bind it to your database. There seems to be a cloudfoundry wrapper for mongo express: https://github.com/komushi/cf-mongo-express .
You may be able to connect to the remote mongodb with the mongo client program in a similar way as you connect to your local mongodb. You can find the credentials (username, password, db-name) in the environment of your app:
cf env <your-app-name>
Access to the mongodb instance may, however, be blocked from machines outside of your CloudFoundry installation. In this case you may want to try the next option.
Push a Docker container that has the mongo client installed to Cloud Foundry, SSH into the container, and use the mongo client from there. Note that pushing Docker containers to CF is not enabled in all Cloud Foundry installations.
You could also expose your domain objects via REST controllers, possibly using Spring Data REST: http://projects.spring.io/spring-data-rest/.
Use cf ssh and an SSH tunnel. Bind the mongodb instance to an app, run cf env to get the host, port and credentials (or make a service key). Then run cf ssh -N -L <localport>:<service-fqdn-or-ip>:<remote-port> app-name (the app you tunnel through doesn't strictly matter). Now connect a client to localhost:<localport> with the credentials you got from cf env, as in the sketch below.
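For example, once the tunnel is up, a minimal pymongo sketch for that last option (every bracketed value is a placeholder for the port and credentials from cf env):

from pymongo import MongoClient

# Connect through the local end of the cf ssh tunnel; all bracketed
# values are placeholders taken from `cf env <your-app-name>`.
client = MongoClient("mongodb://<username>:<password>@localhost:<localport>/<database>")
db = client["<database>"]
print(db["<collection>"].find_one())  # same kind of query as db.collection.find() locally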
I developed a web application, Example1:7575, which uses FBA. I have now deployed its WSPs to a new server, Production:2525, to get the same functionality as the previous server's web application. However, I was not able to fetch the data from SQL Server, and I'm getting the following error: A Membership Provider has not been configured correctly. Check the web.config settings for this web application.
I have manually entered the same membership and role provider entries from my previous server's central admin, security token service, and web application web.config files into the new web.configs and matched them.
Can someone help me figure out where I might be going wrong? Any help would be greatly appreciated.
If you can't fetch data from SQL Server, there's probably an issue with permissions on the database. Check the database connection string that FBA is using. It likely uses Windows authentication to connect, in which case it will connect as the user assigned to the app pool for the web application and the secure token service. Check that the configured app pool identities have permission to access the SQL Server databases.