How to limit a user's SSH access to certain folders - amazon-web-services

Currently, the project we are working on involves a freelance front-end developer. As we have never worked with him before, we are looking for a way to limit his access to our servers and files while still letting him modify the view files currently on these servers.
The current project (all on one server) is compartmentalised into 6 separate mini sites, all using an MVC structure.
e.g.
Mini Site 1
-- Models
-- Views
-- Controllers
Mini Site 2
-- Models
-- Views
-- Controllers
etc
We need to restrict his access to the Views folder of each mini site and nothing else.
We are on Amazon EC2 and use security groups with a limited IP range. We cannot allow him to use FTP because that opens us up to more potential issues.
We have also looked at file and group permissions, but we have thousands of files on this server alone.
Any ideas on how this can be achieved with as little footprint as possible, so once he leaves we can remove his access and revert the settings etc.?

You could use chmod and chown. I assume that your normal users can sudo and modify files at will? Or are they group-based? Here are two approaches you can pick from.
Approach 1:
If your normal employees/users can use sudo, you can chown all the folders so they are owned by root and a new group called programmers by doing chown -R root:programmers /var/www/dir/ . This will make dir and everything in it owned by root and the group programmers. Then you would do chmod -R 744 /var/www/dir/ . This will make it so that the root user has R/W/X permissions on dir and all folders in it (that is the 7), users in the programmers group have read-only permissions (the first 4), and all other users have read-only permissions (the last 4).
From there, for the directories you want him to have access to, you would do: chmod -R 774 /var/www/dir/front-end/views/ which gives root and all users in the programmers group full R/W/X permissions. If you wanted to do it per file, you could do chmod 774 /var/www/dir/front-end/views/index.html
For all other users, if they wanted to modify a file (let us say they are using vim), they'd need to do sudo vim /var/www/dir/front-end/views/index.html . This lets them act as root and edit regardless of the Other permission (that last 4 in the three-digit octal).
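Put together, a minimal sketch of Approach 1 using the example paths above might look like this (the groupadd line and the directory/file split via find are my additions, since directories need the execute bit to stay traversable; a flat 744 on directories would lock the group out):
# create the group and hand the whole tree to root:programmers
groupadd programmers
chown -R root:programmers /var/www/dir/
# read-only for group and others: 755 on directories (traversable), 644 on files
find /var/www/dir/ -type d -exec chmod 755 {} +
find /var/www/dir/ -type f -exec chmod 644 {} +
# group write access only inside the views folder he needs to edit
find /var/www/dir/front-end/views/ -type d -exec chmod 775 {} +
find /var/www/dir/front-end/views/ -type f -exec chmod 664 {} +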
Approach 2
If they are group-based, you could make all files owned by root and the group employees (assuming normal users are in that group). Then for the files that you want him to edit (let us say his username is frontdev), you could do chown -R frontdev:employees /var/www/dir/front-end/views/ and then chmod that directory to 774... and you can do the same for individual files. That way all your employees, including you, in the employees group would have full permissions, root would have permissions on all files and directories, and you could assign his user as the one-off user in control of the files/dirs you need him to have access to.
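A minimal sketch of Approach 2 with the same example paths (the username frontdev comes from the answer above; the employees group is assumed to already exist):
# everything owned by root and the employees group
chown -R root:employees /var/www/dir/
# hand the views folder to the freelancer while keeping the employees group on it
chown -R frontdev:employees /var/www/dir/front-end/views/
chmod -R 774 /var/www/dir/front-end/views/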
You can also look into jailing the user to only authorized directories. Jailkit is a big one. Here is a good tutorial: https://askubuntu.com/questions/93411/simple-easy-way-to-jail-users

Related

Copying objects from a directory/folder in one bucket to another bucket's folder using Transfer

I want to use Google Transfer to copy all folders/files in a specific directory in Bucket-1 to the root directory of Bucket-2.
I have tried to use Transfer with the filter option, but it doesn't copy anything across.
Any pointers on getting this to work within Transfer, or step-by-step instructions for Cloud Functions, would be really appreciated.
I reproduced your issue and it worked for me using gsutil.
For example:
gsutil cp -r gs://SourceBucketName/example.txt gs://DestinationBucketName
Furthermore, I tried to copy using the Transfer option and it also worked. These are the steps I followed with the Transfer option:
1 - Create a new Transfer Job.
Panel "Select source":
2 - Select your source, for example a Google Cloud Storage bucket.
3 - Select the bucket with the data you want to copy.
4 - In the field "Transfer files with these prefixes", add your data (I used "example.txt").
Panel "Select destination":
5 - Select your destination bucket.
Panel "Configure transfer":
6 - Choose "Run now" if you want to complete the transfer now.
7 - Press "Create".
For more information about copying from one bucket to another, you can check the official documentation.
So, a few things to consider here:
You have to keep in mind that Google Cloud Storage buckets don't treat subdirectories the way you would expect. To the bucket, a subdirectory is basically just part of the object name. You can find more information about that in the How Subdirectories Work documentation.
This is also the reason why you cannot transfer a file that is inside a "directory" and expect only the file's name to appear in the root of your target bucket. To give you an example:
If you have a file at gs://my-bucket/my-bucket-subdirectory/myfile.txt, once you transfer it to your second bucket it will still have the subdirectory in its name, so the result will be: gs://my-second-bucket/my-bucket-subdirectory/myfile.txt
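If you want the object to end up at the root of the second bucket without the subdirectory in its name, one option is to copy it explicitly and name the destination object yourself, e.g. with gsutil (the bucket and object names below are just the ones from the example above):
gsutil cp gs://my-bucket/my-bucket-subdirectory/myfile.txt gs://my-second-bucket/myfile.txt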
That prefix behaviour is also why, if you are interested in automating this process, you should definitely give the Google Cloud Storage Client Libraries a try.
Additionally, you could also use the GCS Client with Google Cloud Functions. However, I would just suggest this if you really need the Event Triggers offered by GCF. If you just want the transfer to run regularly, for example on a cron job, you could still use the GCS Client somewhere other than a Cloud Function.
The Cloud Storage Tutorial might give you a good example of how to handle Storage events.
Also, in your future posts, try to provide as much relevant information as possible. For this post, as an example, it would've been nice to know what file structure you have in your buckets and what output you have been getting. And if you can state your use case straight away, it will also prevent other users from suggesting solutions that don't apply to your needs.
Try this in Cloud Shell in the project:
gsutil cp -r gs://bucket1/foldername gs://bucket2

AWS S3 bucket clean-up but keep a certain number of folders

So, currently inside an S3 bucket, I store the javascript bundle files output by webpack. Here is a sample folder structure:
- s3_bucket_name
- javascript_bundle
- 2018_10_11
- 2018_10_09
- 2018_10_08
- 2018_10_07
- 2018_10_06
- 2018_10_05
So I want to clean up the folders and keep only 5 of them (the folder names are the deployment dates). I am unable to clean up by date alone, since it's possible we may not deploy for a long time.
Because of this, I am unable to use lifecycle rules.
For example, if I set the expiration to 30 days, S3 will automatically remove all the folders if we don't deploy for 30 days; then all the javascript files would be removed and the site won't work.
Is there a way to accomplish this using AWS CLI?
The requirements are
Delete folder by date
Keep a minimum of 5 folders
For example, given the following folders, we want to delete folders older than 30 days while keeping at least 5 folders:
- 2018_10_11
- 2018_09_09
- 2018_08_08
- 2018_07_07
- 2018_06_06
- 2018_05_05
The only one that will be deleted is 2018_05_05.
I don't see any options to do this via the aws s3 rm command.
You can specify which folders to delete, but there is no option in the AWS CLI to specify which folders you do not want to delete.
This requirement would best be solved by writing a script (e.g. in Python) that can retrieve a list of the bucket contents and then apply some logic to decide which objects should be deleted.
In Python, using boto3, list_objects_v2() can return a list of CommonPrefixes, which is effectively a list of folders. You could then determine which folders should be kept and then delete the objects in all other paths.
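If you would rather stay with the AWS CLI, the same idea can be sketched as a small shell script instead of Python (the bucket and prefix names below are the example ones from the question; this only enforces the "keep the newest 5" rule, and a date check on the folder names could be layered on top):
BUCKET=s3_bucket_name
PREFIX=javascript_bundle/
# list the top-level "folders" (common prefixes), sort them (YYYY_MM_DD sorts chronologically),
# drop the newest five, and delete everything that is left
aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" --delimiter / \
    --query 'CommonPrefixes[].Prefix' --output text | tr '\t' '\n' | sort | head -n -5 |
while read -r folder; do
    echo "Deleting s3://$BUCKET/$folder"
    aws s3 rm "s3://$BUCKET/$folder" --recursive
done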

wso2 api-m running in docker as non-root user

I am looking into running WSO2 API-M in OpenShift.
I am trying to run API-M, but it keeps failing because of missing permissions to write to the file system.
Unable to create the directory
[/opt/wso2/wso2am-2.1.0/repository/deployment/server/webapps/am#sample#calculator#v1]
Unable to create the directory
[/opt/wso2/wso2am-2.1.0/repository/deployment/server/webapps/authenticationendpoint]
In all the examples I see, the container is running as root, but we want to avoid that and run it as USER 1010.
Can you set a value to make it write to a specified location?
Running it as a user with uid 1010 will not help either. You need to set up file system permissions so that the directories and files you need to write to have group root and are writable by the group.
This is necessary because, by default under OpenShift, your application will run as an assigned uid unique to your project. This is outside the range of what would be in the /etc/passwd file, and you cannot predict what it will be in advance. Because it isn't in /etc/passwd, it falls back to running with group root, which is why you need the file system permissions to be group root and writable by the group.
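As a rough sketch, in the Dockerfile you build the image from, you could make the WSO2 tree group root and group-writable before switching to the non-root user (the path below is taken from the error messages above; chgrp -R 0 plus chmod -R g=u is the usual pattern for images that have to run with an arbitrary uid on OpenShift):
# give the WSO2 tree to group root and let the group write to it,
# so the arbitrary assigned uid (which runs with gid 0) can create directories there
RUN chgrp -R 0 /opt/wso2/wso2am-2.1.0 && \
    chmod -R g=u /opt/wso2/wso2am-2.1.0
USER 1010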

Add Network Location to My Computer (Group Policy)

The shares at my company are becoming unwieldy, and we have now officially run out of letters to map shares to, having exhausted A, B, and H-Z. Not all of our users need access to some of these shares, but there are enough people who need access to enough different shares that we can't simply recycle letters that are used by other shares. At this point we're going to need to start moving shares over to network locations.
Adding a network location shortcut on My Computer isn't difficult; I right-click and use the wizard. But how do I do it through Group Policy? I don't want to have to set up 100 or so computers manually.
This absolutely can be done using only existing Group Policy preferences, but it's a little tedious.
Background Info
When you create a network location shortcut, it actually creates three things:
A read-only folder with the name of your network shortcut
A target.lnk within that folder with your destination
A desktop.ini file that contains the following
[.ShellClassInfo]
CLSID2={0AFACED1-E828-11D1-9187-B532F1E9575D}
Flags=2
I found this information on this Spiceworks community forum post.
How to make it happen
I figured out how to do this from a comment in the same forum post linked above.
You need to create four settings in a Group Policy. All of the settings are located in the Group Policy editor under: User Configuration > Preferences > Windows Settings.
Folders Setting
Add a new Folder preference with the following settings:
Path: %APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME
Read-only checked
Ini Files Settings
There are two settings that you must create here.
Create one for the CLSID2 property:
File Path: %APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME\desktop.ini
Section Name: .ShellClassInfo
Property Name: CLSID2
Property Value: {0AFACED1-E828-11D1-9187-B532F1E9575D}
And another for the Flags property:
File Path: %APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME\desktop.ini
Section Name: .ShellClassInfo
Property Name: Flags
Property Value: 2
Shortcuts Setting
Add a new Shortcut preference with the following settings:
Name: %APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME\target
Target type: File System Object
Location: <Specify full path>
Target path: SHARETARGET
Closing Notes
This will work to create the network location using Group Policy. I would recommend using item-level targeting to keep all of your network locations in one Group Policy object.
It can be a handful to manage all of these separate preferences, so I created an application to help with managing the shares and the user security group filters. Here is my application on GitHub; you must create the first share using the settings above, but the application can handle adding more shares, deleting shares, and updating existing shares.
You can make a .bat script which you can add to a startup policy to run:
net use <drive letter> \\<servername>\<sharename> /user:<username> <password>
Example:
@echo off
net use w: \\server\share /user:Test TestPassword
And this will add, on every computer, a mapped drive W: pointing to \\server\share .
And you can modify it so this runs only on some computers or for some users.
Let's say you want this command to run only for the user 'MikeS'; then you would put something like this:
IF "%USERNAME%"=="MikeS" (
net use w: \\server\share /user:Test TestPassword
)

Different root and sub-folder permissions for the same user - SharePoint 2013

Recently I was given the task of creating a document management system for my company using SharePoint 2013.
I'm still new to this area, and maybe that's the reason I cannot solve my problem with permissions.
So, I have one root folder and a couple of sub-folders with other sub-folders in them.
Structure Example:
ROOT Folder
-----> A folder
--------> A1 folder
-----------> A1.1 folder
-----------> A1.2 folder
--------> A2 folder
-----> B folder
--------> B1 folder
-----------> B1.1 folder
-----------> B1.2 folder
--------> B2 folder
etc.
Is it possible for the same user to have different permissions for (let's say) folder A1 (read/view only) and for sub-folders A1.1 and A1.2 (edit/update)?
Thanks for your time.
Yes, you can. Normally, permissions are inherited. For example, if user A has Read permission on a folder, all the subfolders will have the same permission automatically.
However, you can break this inheritance and you can configure individual permissions at each folder or even at each item as well.
At the same time, you should not break this inheritance too often. Consider having a separate site, subsite, or library altogether for this purpose.
See these articles for best practices:
http://technet.microsoft.com/en-us/library/gg128955%28v=office.15%29.aspx
http://technet.microsoft.com/en-us/library/dn169567%28v=office.15%29.aspx