I am using Spring Boot and created a web application backed by a MongoDB database. Locally I run "mongod" and "mongo" from a command prompt, where I can query the data that I have entered through the UI.
My application currently uses MongoDB running on localhost with the default port 27017, and the web application reflects the data stored in that database. When I push this application to Cloud Foundry and bind the MongoDB service, it uses another database. Where and how can I view/access all the data that is being entered? On the local machine I am able to use db.collection.find() and it queries all of my data.
Problem
Once I push my application to Cloud Foundry, none of the data that was stored locally is linked with it. I am able to store values in the PCF MongoDB, but I do not know how to view them. Is there a command or a method to view all the data that I have entered into the PCF MongoDB?
Attempt
Looking at my VCAP_SERVICES I was able to see my database name, username, and password, but they look encrypted: letters, numbers, and hyphens all mixed together. Below is an example of how VCAP_SERVICES looks (replica values, same format):
"database": "9faf201a-39b1-4lse-49242f404g11"
"host": "10.100.100.333"
"password": "2jnkj4nk22kk5lk6kj4n4k6nkj6001"
"username": "401849301k-8g3f-5c3j-k28-583920308592f04"
I tried using the command below in a CLI:
mongo someurl.mongodomain.com:45475/database_name -u username -p password
For the database name, username, and password I simply copied and pasted the encrypted-looking values:
mongo myurl.com:1337/9faf201a-39b1-4lse-49242f404g11 -u 401849301k-8g3f-5c3j-k28-583920308592f04 -p 2jnkj4nk22kk5lk6kj4n4k6nkj6001
and I get a connection failure. Maybe I have to supply the correct username and password, but where can I set them? I am using Spring Boot and this was handled automatically for me, so I never created a username or password.
Here are five suggestions (thanks to Daniel.Mikusa for the last one):
Push a web client for MongoDB and bind it to your database. There seems to be a Cloud Foundry wrapper for mongo-express: https://github.com/komushi/cf-mongo-express .
You may be able to connect to the remote MongoDB with the mongo client program in the same way you connect to your local MongoDB. You can find the credentials (username, password, db-name) in the environment of your app:
cf env <your-app-name>
Access to the MongoDB instance may, however, be blocked for machines outside of your Cloud Foundry installation. In that case, try one of the next options.
Push a Docker container that has the mongo client installed to Cloud Foundry, SSH into the container, and use the mongo client from there. Note that pushing Docker containers is not enabled in all Cloud Foundry installations.
Finally, you could expose your domain objects via REST controllers, possibly using Spring Data REST: http://projects.spring.io/spring-data-rest/.
Use cf ssh and an SSH tunnel. Bind the MongoDB instance to an app and run cf env to get the host, port, and credentials (or create a service key). Then run cf ssh -N -L <local-port>:<service-fqdn-or-ip>:<remote-port> <app-name> (which app you tunnel through doesn't strictly matter, it's just the one we are tunneling through). Now connect a client to localhost:<local-port> and use the credentials you got from cf env, as in the sketch below.
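For example, using the replica values from the question above (the app name my-app and the service host/port are assumptions; substitute whatever cf env prints for your own app):

cf ssh -N -L 27017:10.100.100.333:27017 my-app
mongo localhost:27017/9faf201a-39b1-4lse-49242f404g11 -u <username-from-cf-env> -p <password-from-cf-env>

Once connected, db.collection.find() works just as it does against your local MongoDB.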
Related
I'm using Python to interact with the Google Sheets API using a library called gspread_pandas.
The flow to let this Python script authenticate with Google is basically:
Create a project on GCP;
Create an OAuth2 token (the application type is "web app", though I am in doubt whether it should be "desktop app");
Expose the token in the local environment and run the Python script;
The Python script will ask me to access a link like the one below:
https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=foo.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8182%2F&scope=openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets&state=acbdef&access_type=offline
If I succeed in logging in, the script runs smoothly (the data from the sheets is retrieved by the Python script).
Link to the description of the authentication process.
This runs fine locally on my machine. However, when I set up this workflow on EC2 and run the Python script in the EC2 terminal, it asks me to access an external link (accounts.google...). When I open it in the browser on my local machine and enter my Google credentials, at the end of the process it throws:
This site can’t be reached. localhost refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
The terminal on EC2 stays stuck waiting for the login process to finish, but it has already failed in my local browser (on my machine).
My suspicion is that when I log in to Google using my local browser, the redirect can't get back to the EC2 instance to finish the authentication process.
The type of this OAuth token is "Web Application", and the redirect URI I set is https://<public-ip>.sslip.io/8182 (so I can use the AWS public IP on GCP, for testing only).
I wonder how to use Google OAuth on a remote server while logging in to the Google account in my local browser. I'm trying this method because the library docs say it's possible:
you will have to authenticate through a text based OAuth prompt; this makes it possible to run on a headless server through ssh or through a Jupyter notebook
Link to citation
I tried to use a service account, but my company does not allow this kind of auth.
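For what it's worth, a common workaround for exactly this redirect problem is to forward the redirect port over SSH, so that when Google redirects the local browser to http://localhost:8182/ the request is tunneled to the script waiting on EC2. A minimal sketch, assuming the script listens on port 8182 (as in the URL above); the key file and user name are placeholders:

ssh -i my-key.pem -L 8182:localhost:8182 ec2-user@<ec2-public-ip>

Run the Python script inside that SSH session, open the auth link in the local browser, and the redirect to localhost:8182 reaches the script on EC2. For this to work, the redirect URI registered on GCP must stay http://localhost:8182/.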
I have two servers on a Windows domain with Active Directory correctly configured to allow users to log in with smartcard credentials. Currently, I can log in to server1 and run remote PowerShell commands on server2 through WinRM using smartcard credentials, without any problem.
I would like to build some sort of web service (preferably on node.js) on server1, so that it presents a user with a webpage that prompts for smartcard credentials. Using these credentials, server1 would be able to run remote PowerShell commands on server2.
Is this possible? I saw some references to pcsclite on other posts. Is this all I need? If so, could someone provide a code snippet of something that could accomplish this?
Maybe a simpler question that could help me get started: how could I even use these credentials to connect to a file share on server2 and download a file?
Thanks!
Look into using WAP (Web Application Proxy) in IIS to set up a gateway, and use Active Directory client certificate mapping.
I set up Dataproc using the steps in the link here:
https://cloud.google.com/dataproc/docs/tutorials/jupyter-notebook
But my Jupyter keeps asking for a password.
I didn't set any password.
I tried my Google account password; that doesn't work.
I ran sudo grep -ir password from /root and got the following, which confirmed that no password is set:
.jupyter/jupyter_notebook_config.py:## Hashed password to use for web authentication.
.jupyter/jupyter_notebook_config.py:# The string should be of the form type:salt:hashed-password.
.jupyter/jupyter_notebook_config.py:#c.NotebookApp.password = u''
.jupyter/jupyter_notebook_config.py:# Only used when no password is enabled.
.local/share/jupyter/runtime/nbserver-3668.json: "password": false,
Since the initialization action just installs the latest version using conda install jupyter, this appears to have been caused by a recent upstream change: upgrading the notebook component from 4.2.3 to 4.3.0 turned token-based auth on by default. A cluster I deployed a couple of weeks ago using the out-of-the-box init action didn't have the login page you're seeing. The design of the init action is to let Google Compute Engine firewalls be your layer of defense and the SSH tunnel be your secure connection, rather than relying on the various third-party implementations of auth in the different Hadoop/Spark tools and web UIs.
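(For reference, the SSH tunnel mentioned here is the SOCKS-proxy approach from the linked tutorial; a rough sketch, with the cluster name and zone as placeholders:

gcloud compute ssh <cluster-name>-m --zone=<zone> -- -D 10000 -N

Then start a browser that proxies through localhost:10000, e.g. Chrome with --proxy-server="socks5://localhost:10000", and open http://<cluster-name>-m:8123.)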
The solution will be to add a line to setup-jupyter-kernel.sh:
echo "c.NotebookApp.token = u''" >> ~/.jupyter/jupyter_notebook_config.py
to disable Jupyter-side authentication altogether and revert to the behavior from a couple of weeks ago. Note that if you want to do this yourself, you'll have to adjust the INIT_ACTIONS_REPO and INIT_ACTIONS_BRANCH settings in jupyter.sh, which may take some getting used to if you haven't customized it before. We'll try to push a fix as soon as possible, and once that's done you should be able to use the out-of-the-box init action without hitting the login screen again.
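A sketch of what that might look like, assuming your version of the init action reads those settings from cluster metadata (the repo URL, branch, bucket, and cluster name are placeholders for your own fork; check the variable defaults at the top of jupyter.sh for your version):

gcloud dataproc clusters create <cluster-name> \
  --initialization-actions gs://<your-bucket>/jupyter/jupyter.sh \
  --metadata INIT_ACTIONS_REPO=https://github.com/<you>/dataproc-initialization-actions.git,INIT_ACTIONS_BRANCH=<your-branch>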
If you already have a cluster running, you can disable the auth for your Jupyter server by running the following manually as root after SSHing into the master:
sudo su
killall -9 jupyter-notebook
echo "c.NotebookApp.token = u''" >> ~/.jupyter/jupyter_notebook_config.py
/dataproc-initialization-actions/jupyter/internal/launch-jupyter-kernel.sh
Alternatively, if you do want to keep the new default token-authorization approach, the Jupyter server actually logs a generated token to /var/log/jupyter_notebook.log; look for a line stating The Jupyter Notebook is running at: http://[all ip addresses on your system]:8123/?token=[some-token-string-here]. That token string can be plugged into the password field or passed as the URL parameter, as shown.
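For example, a quick way to pull that line out of the log:

grep -i 'token=' /var/log/jupyter_notebook.log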
EDIT: The fix has now been committed to Dataproc's init action repository and synced to gs://dataproc-initialization-actions. Out-of-the-box deployments once again work without an extra login page in the Jupyter UI.
A new metadata option, with key JUPYTER_AUTH_TOKEN, has also been added if you do want to specify a token (which Jupyter also allows to be used in the password field). Use it as follows only if you want a login page requesting your specified token; no metadata keys are necessary if you just want the old behavior with no login page:
gcloud dataproc clusters create <cluster-name> \
--initialization-actions gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--metadata JUPYTER_AUTH_TOKEN=foobarbaz
Then your login password will be foobarbaz.
If you haven't set any password, you can log in with the credentials of the server where it is installed.
I set up a Neo4j database on EC2 and am not sure how to access it with my REST client. First, how do I change the username and password once it's running on EC2? Also, what do I change localhost to so I can access the server?
This is the example statement I want to know how to configure:
from neo4jrestclient import client
db = client.GraphDatabase("http://localhost:7474", username="neo4j", password="neo4j")
The first action you have to take on a new Neo4j instance is changing the password. To do so, point your browser at the database using http://<your-ec2-host>:7474, log in with the default neo4j/neo4j, and change your password.
Your client app then of course needs to supply the changed password.
Alternatively you can change the password via the REST API as well, see http://neo4j.com/docs/stable/rest-api-security.html#rest-api-security-user-status-and-password-changing.
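A minimal sketch of that REST call using Python's requests library; the host is a placeholder for your EC2 address, and you authenticate with the current (default) credentials:

import requests

# POST the new password to the user's password endpoint (Neo4j 2.x REST API),
# authenticating with the current credentials
r = requests.post(
    "http://<your-ec2-host>:7474/user/neo4j/password",
    json={"password": "my-new-password"},
    auth=("neo4j", "neo4j"),
)
r.raise_for_status()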
I developed a web application, Example1:7575, which uses FBA. I deployed its WSPs to a new server, Production:2525, to get the same functionality as my previous server's web application. However, I was not able to fetch the data from SQL Server, and I'm getting the following error: A Membership Provider has not been configured correctly. Check the web.config settings for this web application.
Actually, I manually entered the same membership and role providers from my previous server's central admin, security token service, and web application web.config entries into the new web.configs and matched them.
Can someone help me figure out where I might be going wrong? Any help would be greatly appreciated.
If you can't fetch data from SQL Server, there's probably an issue with permissions on the database. Check the database connection string that FBA is using. It likely uses Windows authentication to connect, in which case it will be connecting as the user assigned to the app pool for the web application and the security token service. Check that the configured app pool identities have permission to access the SQL Server databases.