I am trying to run RStudio Server on a virtual instance running Ubuntu on Google Cloud. Given that, by default, a user on Google Cloud does not have a sudo password, connecting to the server with a username/password is not directly possible.
Several tutorials (see tuto 1, tuto 2 and tuto 3) suggest creating a new user, which comes with a username/password and therefore allows the standard username/password login to RStudio Server. However, this solution raises several issues, such as access/write permissions, a different R package folder, difficulties connecting to the new user, etc.
Is there a way to connect to the RStudio server without creating a new user? I guess a possible approach would be to add a password to the default user on Google Cloud, although this also seems difficult in itself (old thread on this).
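Something along these lines is what I have in mind, assuming the default user keeps its passwordless sudo (untested sketch):

    # on the VM: give the default user a password, then log in to RStudio Server with it
    sudo passwd "$USER"
    # optional: restart RStudio Server afterwards
    sudo rstudio-server restart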
Thanks!
I'm building a desktop client app (win/linux/mac) with a backend hosted in GCP (I'm considering other cloud platforms too). The desktop app should be minimalistic and give the backend access to local machine resources. I'm looking for a way to invoke my app from the server (when some event occurs) so that the app can then do some work on the local machine. Here's what I've tried so far.
Google Cloud Pub/Sub. It seems to do what I need, but to make it work I have to create a service account, generate a JSON key and store it locally, which is not good. I can restrict the service account's access permissions, of course, but it still doesn't look good to me. Maybe there are other ways to authenticate my app running on the end user's machine? I want to keep my desktop app minimal (ideally without a UI, just an "agent" console process / Windows service). I could consider a login screen to connect the app with the backend if that solves the problem, but I don't want to overcomplicate things.
Google Cloud Run + SignalR / WebSockets. This solution also looks good, but it has one significant disadvantage: as long as there is at least one open WebSocket, the Cloud Run instance is considered active and is therefore billed. There are other difficulties related to scalability and synchronization between container instances too.
What do you think about the options above, and what are the other possibilities? Am I left with REST API and polling for updates? I'm quite new to the cloud stuff so any help is appreciated. Thanks!
If you want to be able to invoke your local app from Google Cloud, you need two things.
The first is to register your app on Google Cloud, preferably with an auth mechanism (an API key, for example). That way, the GCP backend knows where to call your app (which IP/port) and how (the auth mechanism).
The second is to have your app up and running and listening for external communication. HTTP is the easiest way: wait for an HTTP call on the IP/port defined during registration, check the auth, and perform the processing.
You can store this data (the location and the auth) in Firestore, for example, and use Cloud Run to perform the HTTP call.
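As a rough illustration of that call (the IP, port, path and header name below are made up for the example):

    # hypothetical call from the backend (e.g. Cloud Run) to the registered agent,
    # using the location and API key previously stored in Firestore
    curl -X POST "http://203.0.113.10:8080/run-task" \
         -H "X-API-Key: key-agreed-at-registration" \
         -H "Content-Type: application/json" \
         -d '{"event": "example-event"}'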
You can also invert the solution and poll the backend (long polling or a regular poll) from the local app while it is running.
Both approaches are possible; the second one is slightly easier, but its security can be challenging to manage.
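A minimal sketch of the polling variant, assuming a hypothetical endpoint and API key (both placeholders):

    # on the local machine: ask the backend for pending work every 30 seconds
    while true; do
      curl -s -H "X-API-Key: my-agent-key" \
           "https://my-backend.example.com/agent/pending-tasks" \
        | ./handle-tasks.sh    # placeholder for whatever the agent does with the work
      sleep 30
    done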
I have learned that Memorystore supports the Redis protocol but has some limitations. Please help me address the following challenges.
Unlike Redis, Memorystore doesn't seem to support master-slave provisioning for routing read-write and read-only requests separately. Is there any workaround?
The existing Redis setup has a password-protected authentication mechanism in place. How do we enable an auth config for Memorystore?
The existing application-level client code is written in C++. Is there any way to leverage the existing code to connect to Memorystore?
Thanks in advance.
I will try to address your questions individually, for better formatting and in case you have further doubts about any of them.
As you mentioned, and as confirmed by a Google agent here, Memorystore doesn't support master-slave provisioning. For now, there is no workaround for it either. I believe opening a Feature Request with Google or answering the above Google Group question might be a good option, to receive an official response from Google.
To configure authentication for your Memorystore instance, you need to create a service account and set an environment variable. The steps are as follows (a gcloud equivalent is sketched after the list):
In the Cloud Console, go to the Create service account key page.
From the Service account list, select New service account.
In the Service account name field, enter a name.
From the Role list, select Project > Owner.
Click Create. A JSON file that contains your key downloads to your computer.
Set the environment variable using the following command (this is an example): export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json".
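For reference, roughly the same steps with the gcloud CLI (the service account name and project ID are placeholders, and you should narrow the role down from Owner where possible):

    gcloud iam service-accounts create memorystore-client --display-name "memorystore-client"
    gcloud projects add-iam-policy-binding my-project \
        --member "serviceAccount:memorystore-client@my-project.iam.gserviceaccount.com" \
        --role "roles/owner"
    gcloud iam service-accounts keys create ~/memorystore-key.json \
        --iam-account memorystore-client@my-project.iam.gserviceaccount.com
    export GOOGLE_APPLICATION_CREDENTIALS="$HOME/memorystore-key.json"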
There isn't official support for C++; however, you can use the Client Libraries to connect to the API in the language you want. It might be worth giving it a try for C++. I found this repository provided by Google, related to C++, that can be used to connect. This seems to be the only available option.
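That said, since Memorystore exposes a standard Redis endpoint, existing Redis clients (redis-cli, or a C++ client such as hiredis) can generally talk to it directly once you are on the authorized VPC network. A quick connectivity check might look like this (instance name, region and IP are placeholders):

    # find the instance's host and port
    gcloud redis instances describe my-instance --region us-central1 \
        --format "value(host,port)"
    # from a VM on the same authorized network
    redis-cli -h 10.0.0.3 -p 6379 ping    # should reply PONG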
I hope these answers help you and clarify your doubts about the product.
Let me know if the information helped you!
Need suggestions on how to automate user login to Amazon WorkSpaces from Ubuntu 18.04 desktops.
We're a small Engineering shop of 20 users all using Ubuntu 18.04 desktops to connect to Amazon WorkSpaces (mix of Windows and Linux). Since there isn't a WorkSpaces client yet for Linux, we use the Windows version over WINE.
Our Intranet portal allows for a somewhat automated login process where clicking a Connect button does four things:
Use the URI syntax workspaces://username#registrationcode to launch WorkSpaces Client.
Display the username, registration code, and disposable password in the Intranet page.
Populate Username and Registration Code in the WorkSpaces Client.
Copy password to clipboard.
Details in https://docs.aws.amazon.com/workspaces/latest/adminguide/customize-workspaces-user-login.html
User would still need to copy password from Intranet page and paste to WorkSpaces Client to complete login. We're trying to eliminate this step as users are in & out of WorkSpaces multiple times a day.
I'm considering zenity but unsure if this is the correct approach.
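For example, I was picturing something along these lines, wired to a button or hotkey (untested; it assumes xclip/xdotool are installed and that the WINE window accepts synthetic key events):

    # give the user a moment to focus the WorkSpaces password field
    sleep 2
    # type the clipboard contents (the disposable password) into the focused window
    xdotool type --delay 50 "$(xclip -selection clipboard -o)"
    xdotool key Return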
Please suggest options on Ubuntu 18.04 to automate pasting the password into the WorkSpaces Client.
There is a native Linux Client https://clients.amazonworkspaces.com/linux-install.html and it looks promising. I'm just trying to get it going myself and can log in but I'm getting an error connecting to the desktop. It might be the Ubuntu image we're using.
I have a few questions that might be easy for others, but that I couldn't wrap my head around.
In developing a "PRODUCTION LEVEL" full-stack web application (node.js/react/webpack):
1) Where do you set up your database? (While developing, I'm using Apache CouchDB running on localhost, but when deployed, is a cloud database (Cloudant) the only solution? Or am I missing something?)
2) Is it recommended to deploy my server (node.js) to Digital Ocean/AWS/Heroku AND set up a third-party database elsewhere? (In my case, I have to use either Digital Ocean or Aliyun (a Chinese web service), but they don't seem to have a database package that comes with CouchDB.) What is the practical solution for a production-level application?
3) If a cloud database is the practical solution, what do I do if there is no data center for CouchDB located in China? Is there a cloud database service that universally stores all NoSQL data regardless of your type of DB (MongoDB, CouchDB, etc.)?
4) AWS/Heroku provide add-ons that connect a cloud database to my application; does this make my application faster? For Digital Ocean, there is an article about setting up CouchDB with their service, but does that mean the database will be available for my users to access, or is it just for development purposes?
5) Where and how does Docker come into play to help in my situation?
Sincerely,
I cannot speak for CouchDB, but I have hosted multiple web applications on AWS using their RDS database (MySQL). The service you choose (AWS/DO/Heroku) depends on your application and your requirements (pricing, etc.).
I don't think AWS has a package for MongoDB, but there is a third-party service, MongoLabs, which can host a MongoDB database; I bet there are some out there for CouchDB too.
Or, if you cannot find third-party hosting, consider installing the database on your server itself. Getting a VPS from either DO or AWS and setting it up yourself could be an option in that case. The link you mentioned in your last paragraph would help you here. And yes, if you use that and let Node connect to it, you can use it just like any other cloud-based database; it would just be on your server.
I haven't used Docker, so I cannot say if and how it could help.
UPDATE: (reply to comment)
A VPS is a server in the cloud. You don't set up the database on your local computer, because no one could access that; you set up your database on the VPS (in the cloud), and then everyone can access it.
A VPS is like your own clean copy of a server (Ubuntu/Fedora) in the cloud, so you can do pretty much anything on it, just like on your local computer. So basically your database would also be in the cloud.
There are actually two ways you could do that:
Get a VPS, install your database and set up your node.js server on the same VPS. Your node application would access the database on the same VPS.
Get a VPS specifically for the database, and set up your node.js server on another VPS; this separates the database and the Node app onto two different servers.
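As a small illustration of the difference (the addresses are placeholders), the only thing that really changes for the app is which host it talks to:

    # layout 1: CouchDB on the same VPS as the node app
    curl http://127.0.0.1:5984/mydb
    # layout 2: CouchDB on its own VPS (lock this down with a firewall)
    curl http://203.0.113.20:5984/mydb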
To answer part of your question... if you set up a CouchDB server on Digital Ocean (or on AWS, Azure, Google Cloud, etc.) it will be available to your production users, not just you. You will, of course, want to set up security/a firewall to limit who can access your server.
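As a rough sketch of the firewall part, assuming an Ubuntu server with ufw (the application server's IP is a placeholder):

    # allow only the app server to reach CouchDB's default port, block everyone else
    sudo ufw allow from 203.0.113.10 to any port 5984 proto tcp
    sudo ufw deny 5984/tcp
    sudo ufw enable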
Cloudant provides CouchDB as a service; in other words, you would not have to install the software or manage a server.
With Digital Ocean/AWS/Azure/Google, it is down to you to manage the virtual server and the database/other software on it. You can install CouchDB on any of these services, and you can install both NodeJS and CouchDB on the same virtual server if you wish.
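A minimal sketch of that setup on an Ubuntu VPS (depending on the release you may first need to add the Apache CouchDB package repository, as described in the official install docs):

    sudo apt-get update
    sudo apt-get install -y couchdb
    # CouchDB listens on port 5984; this should return a small JSON welcome document
    curl http://127.0.0.1:5984/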
Bitnami has a CouchDB package that you can use to deploy CouchDB onto several of the major hosting providers, which makes the setup process easier.
I see that AWS and Azure have data centres in China, but at the moment Digital Ocean does not, as far as I am aware. I hope this helps.
After not getting much help on the last question, I decided to blow away the VM and re-create it, as I had already lost a week on this issue. And of course there are still issues, but slightly different ones.
I am using WSS on a Windows Server 2008 machine. In the SharePoint administration, I removed the blocked .asmx page types. I am using the administrator account with its password and the domain, which is the IP of the VM. Normally I would never recommend using the admin account, but since I am just running a test to connect to the SharePoint web services, so be it.
When accessing this site via a web browser, no issues whatsoever.
When accessing the web services from the browser using the admin credentials, no problem.
Then, when trying to access the web service via Visual Studio, I get the Windows Security dialog;
followed by a Discovery Credential prompt for the list;
followed by another Discovery Credential prompt for access to the error.aspx page, although I can see the list of services for lists.asmx;
followed by yet another Discovery Credential prompt asking for permission to the $metadata, and this just continues indefinitely; it will NEVER authenticate via Visual Studio 2010.
And then, of course, when the code is run, what do we get? ACCESS DENIED.
The call is made (the connection is established by code not listed here).
I make the call to the service:
And receive the error.
And IIS for SharePoint is set to Windows Authentication and Impersonation. All defaults.
This has now been going on for 5 days; does anyone have any clue as to what is causing it? I have used this code and technique for years with Windows Server 2003 and WSS 2.0 and/or MOSS 2007, connecting from remote machines, and NEVER, I mean NEVER, had issues like this.
I would really appreciate any help.