First of all, I want to create a folder named ".ssh" on the cluster, under the directory "/home/ssm-user/", and I want it to be there permanently. I then want to place a private key in this folder, which also needs to be permanent.
Currently I have to do this task manually every time I launch the cluster.
I open the AWS CLI through Session Manager, navigate to the directory mentioned above, create the ".ssh" folder with the "mkdir" command, and once the folder has been created I put my file in it so I can use it.
But as already mentioned, I only want to do this procedure once; after that, it must be there whenever I launch my cluster.
Alternatively, if there is any other way to create that folder and place the file in it at launch, that would be really helpful.
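Since a cluster node's file system is recreated at each launch, one way to avoid the manual steps is to script them so they run automatically at launch time (for EMR, a bootstrap action; for plain EC2, a user-data script). A minimal sketch, where the S3 key location, the target home directory, and the ownership step are all assumptions (the demo falls back to a temporary directory so it can run anywhere):

```shell
# Hypothetical launch-time script: create ~/.ssh for ssm-user and drop
# the private key into it. Paths and the S3 bucket are placeholders.
TARGET_HOME="${TARGET_HOME:-$(mktemp -d)}"   # in production: /home/ssm-user
mkdir -p "$TARGET_HOME/.ssh"
chmod 700 "$TARGET_HOME/.ssh"                # SSH requires restrictive modes
# In production, fetch the key from a private S3 bucket (placeholder names):
#   aws s3 cp s3://my-bucket/mykey.pem "$TARGET_HOME/.ssh/mykey.pem"
touch "$TARGET_HOME/.ssh/mykey.pem"          # stand-in for the copied key
chmod 600 "$TARGET_HOME/.ssh/mykey.pem"
# May need root; harmless if the ssm-user account does not exist yet:
chown -R ssm-user:ssm-user "$TARGET_HOME/.ssh" 2>/dev/null || true
```

Registered as a bootstrap action (or user data), this would run on every launch, so the folder and key would always be in place without a manual session.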
This seemed straightforward: pass the service account key file (generated from the GCP console) by specifying its location in the application.properties file. However, I tried all of the following options:
1. spring.cloud.gcp.credentials.location=file:/home/my_user_id/mp6key.json
2. spring.cloud.gcp.credentials.location=file:src/main/resources/mp6key.json
3. spring.cloud.gcp.credentials.location=file:./main/resources/mp6key.json
4. spring.cloud.gcp.credentials.location=file:/src/main/resources/mp6key.json
They all ended up with the same error:
java.io.FileNotFoundException: /home/my_user_id/mp6key.json (No such file or directory)
Could anyone advise where I should put the key file and then how should I specify the path to the file properly?
The same programs run successfully in Eclipse, with messages published and subscribed via GCP Pub/Sub (using the project ID and service account key generated in GCP), but I am now stuck with the above issue after deploying to run on GCP.
As mentioned in the official documentation, the credentials file can be obtained from a number of different locations such as the file system, classpath, URL, etc.
For example, if the service account key file is stored in the classpath as src/main/resources/key.json, pass the following property:
spring.cloud.gcp.credentials.location=classpath:key.json
If the key file is stored somewhere else in your local file system, use the file prefix in the property value:
spring.cloud.gcp.credentials.location=file:<path to key file>
My line looks like this:
spring.cloud.gcp.credentials.location=file:src/main/resources/[my_json_file]
And this works.
The following also works if I put it in the root of the project directory:
spring.cloud.gcp.credentials.location=file:./[my_json_file]
Have you tried following this quickstart? Please try to follow it carefully, and explain any error you get while finishing the quickstart.
Anyway, before running your Java application, try running the following in the console (please modify it with the exact path where you store your key):
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/mp6key.json"
How are you authenticating your credentials in your Java application?
My answer is simple: if you run your code on GCP, you don't have to use a service account key file. Problem eliminated, problem solved!
More seriously, have a look at service identity. I don't know what your current service is (Compute? Functions? Cloud Run?). In any case, you can attach a service account to any GCP component. Then, in your code, simply use the default credentials; the component's identity is loaded automatically. No key to manage, no key to store securely, no key to rotate!
If you provide more detail on your target platform, I can give you some guidance on how to achieve this.
Keep in mind that service account key files are designed to be used by automated apps (with no user account involved) hosted outside GCP (on-prem, another cloud provider, a CI/CD system, Apigee, ...).
UPDATE
When you use your personal account, you can also use the default credential.
Install gcloud SDK on your computer
Use the command gcloud auth application-default login
Follow the instructions
Enjoy!
If it doesn't work, get the <path> displayed after the login command and set this value in the environment variable named GOOGLE_APPLICATION_CREDENTIALS.
If you definitely want to use a service account key file (which is a security issue, for the reasons above, but...), you can use it locally:
Either set the json key file path into the GOOGLE_APPLICATION_CREDENTIALS environment variable
Or run this command gcloud auth activate-service-account --key-file=<path to your json key file>
Provided your file is in the resources folder, try
file://mp6key.json
Using file:// instead of file:/ works for me, at least.
My host is AWS and I need to update my cs-cart from 4.9.3.SP1 to 4.10.1. I tried to execute the update but got the error below for FTP:
Permissions issue: these files require write permissions set (manually or automatically via FTP)
Check the rights on these folders:
/restore/
/var/upgrade - and maybe i
Some directories do not have write access; please enter the FTP account details in the cs-cart settings so they can be configured.
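If you have shell access to the instance, an alternative to the FTP route is to grant write access directly on the directories the upgrade complains about. This is only a sketch: the cs-cart install root is a placeholder (the demo substitutes a temporary directory), and whether user or group write bits are needed depends on which account your web server runs as.

```shell
# Sketch: make the upgrade directories writable. CSCART_ROOT is a
# placeholder; point it at the real cs-cart install root in production.
CSCART_ROOT="${CSCART_ROOT:-$(mktemp -d)}"
mkdir -p "$CSCART_ROOT/restore" "$CSCART_ROOT/var/upgrade"
chmod -R u+w,g+w "$CSCART_ROOT/restore" "$CSCART_ROOT/var/upgrade"
```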
I have got this issue :
WRT_8004
Writer initialization failed [Error opening session output file [/*/diff_zipcode1.out] [error=Permission denied]].
Writer terminating.
The user that runs Informatica has the right to write to this specific folder (I tried a touch directly and it worked), but I still get this error.
The only way to make this workflow work is to set the write permission for everyone...
So I was wondering: does Informatica use a different user than the one who launches the Informatica server, such as my Informatica login user? And if that is the case, how can I set the permissions correctly to write to my folder?
Answer to my situation: I had changed the settings of the Informatica user after launching the Informatica server, so from Informatica's point of view the modification had not taken effect. To fix this problem, I only had to restart the Informatica server.
Informatica will use whichever user has logged in to PowerCenter to create the file.
If you do not want to set full permissions on your folder, it would be best to add that user to a group and give write permissions to the group only.
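The group-based setup described above could be sketched like this. The group name, user name, and output path are assumptions, and the demo substitutes a temporary directory for the real session output folder:

```shell
# Sketch: restrict write access to an "infagrp" group instead of everyone.
OUT_DIR="${OUT_DIR:-$(mktemp -d)}"    # stand-in for the session output dir
# Production steps (require root; names are hypothetical):
#   groupadd infagrp
#   usermod -aG infagrp pc_login_user   # the user logged in to PowerCenter
#   chgrp infagrp "$OUT_DIR"
chmod 770 "$OUT_DIR"                  # owner and group rwx, others nothing
```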
Currently, the project we are working on involves a freelance front-end developer. As we have never used him before, we are looking for a way to limit his access to our servers and files while still letting him modify the view files currently on these servers.
The current project (all on one server) is compartmentalised into 6 separate mini sites, all using an MVC structure.
e.g.
Mini Site 1
-- Models
-- Views
-- Controllers
Mini Site 2
-- Models
-- Views
-- Controllers
etc
We need to limit his access to each view folder for each project but nothing else.
We are using Amazon EC2 and are using security groups with a limited IP range. We are unable to allow him to use FTP because that opens us up to more potential issues.
Also we have looked at file and group permissions but we have thousands of files on this server alone.
Any ideas on how this can be achieved with as little footprint as possible, so once he leaves we can remove his access and revert the settings etc.?
You could use chmod. I assume that your normal users can sudo and modify files at will? Or are they group based? Here are the two approaches you can pick from.
Approach 1:
If your normal employees/users can use sudo, you can chown all the folders so they are owned by root and a new group called programmers by doing chown -R root:programmers /var/www/dir/ This will make dir and everything in it owned by root and the group programmers. Then you would do chmod -R 744 /var/www/dir/ . This will make it so that the root user has R/W/X permissions on dir and all folders in it (that is the 7), users in the programmers group would have read-only permissions (the 4), and all other users would have read-only permissions (the last 4).
From there, for the directories you want him to have access to, you would do: chmod -R 774 /var/www/dir/front-end/views/ , which would give root and all users in the programmers group full R/W/X permissions. If you wanted to do it per file, you could do chmod 774 /var/www/dir/front-end/views/index.html
For all other users, if they wanted to modify a file (let us say they are using vim), they'd need to do sudo vim /var/www/dir/front-end/views/index.html . This would let them act as root and edit regardless of the Other permission (that last 4 in the three-digit octal).
Approach 2
If they are group based, you could make all files owned by root and the group employees (assuming normal users are in that group). Then for the files that you want him to edit (let us say his username is frontdev), you could do chown -R frontdev:employees /var/www/dir/front-end/views/ and then chmod that directory to 774... and you can do the same for individual files. That way all your employees, including you, in the employees group would have full permissions. Root would have permissions on all files and directories... and then you could assign his user as the one-off user in control of the files/dirs you need him to have access to.
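Both approaches boil down to a handful of chown/chmod calls. Here is a runnable sandbox demo of the mode changes; the chown steps are left as comments because they need root, and all paths, user names, and group names are placeholders:

```shell
# Demo of the permission scheme in a temporary sandbox directory.
DIR="$(mktemp -d)/www"                 # stand-in for /var/www/dir
mkdir -p "$DIR/front-end/views"
# sudo chown -R root:programmers "$DIR"                     # Approach 1 ownership
# sudo chown -R frontdev:employees "$DIR/front-end/views"   # Approach 2 ownership
chmod -R 744 "$DIR"                    # owner rwx; group and others read-only
chmod -R 774 "$DIR/front-end/views"    # the freelancer's group gets rwx here
```

Because the modes are set recursively from the top and then widened only on the views directory, removing the freelancer later is just a matter of deleting his user (and, for Approach 1, the programmers group).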
You can also look into jailing the user to only authorized directories. Jailkit is a big one. Here is a good tutorial: https://askubuntu.com/questions/93411/simple-easy-way-to-jail-users