AWS Cognito - how to create a backup? - amazon-web-services

We are currently moving our auth services to AWS Cognito. As it's crucial to keep the user profiles and data safe, we need a backup of the main user pool. We've noticed that there is an option to Import Users via a .csv file with headers equal to the pool attributes, but there is no option to create that .csv automatically. Does anyone know of a solution which automatically generates such a file? The point is to protect the user profiles from accidental deletion of the whole user pool (say, by a tired developer on a Friday night). I've personally tried to implement a workaround by doing all the work manually (getting the headers, fetching the users, mapping them, and creating the csv), but that is not very reliable.
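A rough sketch of that manual approach, in C# with the AWS SDK, looks something like this (the pool ID and header subset are placeholders; a real export would fetch the pool's exact import headers, e.g. via the GetCSVHeader API, and escape CSV values properly):

using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Amazon.CognitoIdentityProvider;
using Amazon.CognitoIdentityProvider.Model;

class CognitoCsvExport
{
    static async Task Main()
    {
        var client = new AmazonCognitoIdentityProviderClient();
        var headers = new[] { "email", "email_verified", "name" }; // placeholder subset of pool attributes
        using var csv = new StreamWriter("users-backup.csv");
        await csv.WriteLineAsync(string.Join(",", headers));

        string paginationToken = null;
        do
        {
            // ListUsers pages through the pool (up to 60 users per page).
            var response = await client.ListUsersAsync(new ListUsersRequest
            {
                UserPoolId = "eu-west-1_XXXXXXXXX", // placeholder pool ID
                PaginationToken = paginationToken,
            });
            foreach (var user in response.Users)
            {
                // Map each user's attribute list onto the header columns (naive, unescaped CSV).
                var byName = user.Attributes.ToDictionary(a => a.Name, a => a.Value);
                await csv.WriteLineAsync(string.Join(",",
                    headers.Select(h => byName.TryGetValue(h, out var v) ? v : "")));
            }
            paginationToken = response.PaginationToken;
        } while (paginationToken != null);
    }
}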

I know I am late to the party but leaving this here for future searches.
I too faced this issue while working with Cognito, and so I made a tool to take backups and restore them to user pools.
You can find it here: https://www.npmjs.com/package/cognito-backup-restore
This can be used via the CLI or via imports (in case you want to write your own wrapper or script).
Please suggest any improvements: https://github.com/rahulpsd18/cognito-backup-restore
This is still under development: to improve on the current implementation, I plan to use a Cognito User Pool Import Job instead of the aws-sdk's adminCreateUser to create users during restore. But it works fine for now.
Cross-region Cognito replication will be implemented too, once I fine-tune the restore process.
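For context, the restore path mentioned above boils down to one adminCreateUser call per backed-up user; a hedged C# sketch of the equivalent SDK call (pool ID, username, and attributes are placeholders):

using System.Collections.Generic;
using Amazon.CognitoIdentityProvider;
using Amazon.CognitoIdentityProvider.Model;

var client = new AmazonCognitoIdentityProviderClient();
await client.AdminCreateUserAsync(new AdminCreateUserRequest
{
    UserPoolId = "eu-west-1_XXXXXXXXX",         // placeholder pool ID
    Username = "jane.doe@example.com",          // taken from the backup file
    UserAttributes = new List<AttributeType>
    {
        new AttributeType { Name = "email", Value = "jane.doe@example.com" },
        new AttributeType { Name = "email_verified", Value = "true" },
    },
    MessageAction = MessageActionType.SUPPRESS, // don't send the invitation message on restore
});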

Amazon has released a Cognito User Profiles Export Reference Architecture for exporting/importing users from a user pool. There are limitations:
Passwords are not backed up, so users will need to reset them
Pools using MFA are not supported
Cognito sub attributes will be new, so if the system depends on them, they need to be copied to a custom user attribute
Federated users also pose challenges with respect to sub
Advanced security - no user history is exported
No support for pools that allow either phone or email as the username
No support for tracked devices

I also created a tool for this, which also supports backing up and restoring groups and their relations to users:
https://github.com/mifi/cognito-backup
You can install it like this:
npm i -g cognito-backup
and use it like this:
cognito-backup backup-users eu-west-1_12345
cognito-backup backup-groups eu-west-1_12345
cognito-backup backup-all-users eu-west-1_12345
cognito-backup restore-groups eu-west-1_12345
cognito-backup restore-users eu-west-1_12345 Abcd.1234 --file eu-west-1_12345.json
Note that passwords cannot be backed up due to an AWS limitation.

To prevent accidental pool deletion, you could create a Service Control Policy (SCP) at the organization level.
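For example, a minimal SCP along these lines (an untested sketch) would deny user pool deletion across the member accounts it is attached to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUserPoolDeletion",
      "Effect": "Deny",
      "Action": "cognito-idp:DeleteUserPool",
      "Resource": "*"
    }
  ]
}

Anyone who legitimately needs to delete a pool would then have to do it from an account the SCP doesn't apply to, or after the policy is detached.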

Related

Auto-generate AWS credentials file from AWS SSO?

I use AWS Single Sign-On (SSO) to get programmatic access keys to our various AWS accounts (production, staging, dev, etc.) that I can then use on the command line.
I often need to hop between multiple environments, so I have to manually add several sets of credentials to my ~/.aws/credentials file, one at a time, from the SSO page.
This isn't the biggest problem, but it is inconvenient and irritating: it takes time; it has to be done a few times a day as the tokens expire; and the profile name in each ~/.aws/credentials snippet has to be manually changed from the account number and SSO identity that AWS includes by default (e.g. [123456789012_AWSReadOnlyAccess]) to the account name (e.g. [dev]) so it works with our other tools (in this case Terraform workspaces).
I'd like a way to autogenerate user-friendly content for my ~/.aws/credentials easily covering all the SSO accounts I use day to day.
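For reference, the manual rename looks something like this in ~/.aws/credentials (account ID, role name, and key values are placeholders):

# As pasted from the AWS SSO page:
[123456789012_AWSReadOnlyAccess]
aws_access_key_id=<placeholder>
aws_secret_access_key=<placeholder>
aws_session_token=<placeholder>

# What I actually want, so our other tools can find it:
[dev]
aws_access_key_id=<placeholder>
aws_secret_access_key=<placeholder>
aws_session_token=<placeholder>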
Is there such a facility/tool/script?
I couldn't find an existing example of something I could use to do this, so I put together this gist: a bookmarklet that adds a button to the AWS SSO landing page which, when clicked, generates the ~/.aws/credentials content and copies it to the clipboard, ready to use!
https://gist.github.com/bennyrw/4c6b18221611332605ea91474ae04f10
I hope it helps someone with the same problem I had :)

AWS CodeCommit prevent merge until successful build

I'm using an AWS Lambda function to kick off a build in AWS CodeBuild when a Pull Request is created or updated in AWS CodeCommit, which is working well.
However, I'd like to prevent the merging of that Pull Request into the master branch of the repository until the latest build for that PR has completed successfully.
Does anyone know if there's a way that can be done in AWS? E.g. so that the Merge button is disabled or unavailable, as it is when not enough approvals have been obtained?
I was looking into this myself, and from what I understand it is currently not possible to create such a rule directly, but I think it should be doable with a different approach.
Instead of requiring a custom rule that disables merging (which doesn't exist today), you could make the PR require review from a specific IAM user. With that, you could use a fixed "build" user and fire an automatic approval for the PR once the build finishes successfully. This will in turn satisfy that rule on the PR and allow it to be merged after the build succeeds.
Since approval can be done via the CLI, I'm sure it is also possible via the API. For example, you could use the pull request approval API to automatically mark any given PR as approved by the calling user, then ensure that the service calling it is the same user registered in the "build" approval rule template.
Besides the HTTP web API, there are also other ways to call into these CodeCommit actions, like the AWS SDK (C# example: https://www.nuget.org/packages/AWSSDK.CodeCommit/).
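If it helps, the relevant action appears to be UpdatePullRequestApprovalState. A hedged C# sketch of the approval step with that SDK package (the PR id is a placeholder, and the client must run under the "build" user's credentials):

using Amazon.CodeCommit;
using Amazon.CodeCommit.Model;

var client = new AmazonCodeCommitClient();

// Look up the PR's current revision, then approve that revision as the calling user.
var pr = await client.GetPullRequestAsync(new GetPullRequestRequest { PullRequestId = "42" });
await client.UpdatePullRequestApprovalStateAsync(new UpdatePullRequestApprovalStateRequest
{
    PullRequestId = "42",                   // placeholder PR id
    RevisionId = pr.PullRequest.RevisionId,
    ApprovalState = ApprovalState.APPROVE,
});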

don't want to login google cloud with service account

I am new to Google Cloud and this is my first experience with the platform (before this, I was using Azure).
I am working on a C# project which needs to save images online, and for that I created a Cloud Storage bucket.
Now, to use the service, I found out that I have to download a service account credential file and set the path to that file in an environment variable.
This is working fine:
RxStorageClient = StorageClient.Create();
But the problem is that my whole solution is a collection of 27 different projects, there are multiple Cloud Storage accounts involved, and I also want to use them with Docker.
So I was wondering: is there any alternative to this service account file system, like the API keys or connection strings that Azure provides?
I saw that this initialization function has some other options to authenticate, but I didn't see any examples:
RxStorageClient = StorageClient.Create();
Can anyone please provide a proper example of connecting to the Cloud Storage service without this service account file system?
You can avoid relying on the environment variable by downloading a credential file for each project you need to access.
So, for example, if you have three projects whose storage you want to access, you'd need code paths that initialize the StorageClient with the appropriate service account key for each of those projects.
StorageClient.Create() can take an optional GoogleCredential object to authorize it (if you don't specify one, it uses the Application Default Credentials, which can be pointed at a key file via the GOOGLE_APPLICATION_CREDENTIALS environment variable).
So on GoogleCredential, check out the static FromFile(String) call, where the String is the path to the service account JSON file.
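Putting those two together, a minimal sketch (the key file path is a placeholder; you would load one such credential per project):

using Google.Apis.Auth.OAuth2;
using Google.Cloud.Storage.V1;

// Load one service account key explicitly instead of relying on the
// GOOGLE_APPLICATION_CREDENTIALS environment variable.
var credential = GoogleCredential.FromFile("/secrets/project-a-storage.json");
var storageClient = StorageClient.Create(credential);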
There are no examples because service accounts are absolutely required, even if hidden from view, to deal with Google Cloud products. They're part of the IAM system for authenticating and authorizing software for use with the various products. I strongly suggest that you become familiar with the mechanisms of providing a service account to a given program. For code running outside of Google Cloud compute and serverless products, the current preferred solution involves using environment variables to point to files that contain credentials. For code running on Google Cloud (like Cloud Run, Compute Engine, or Cloud Functions), it's possible to provide service accounts by configuration so that the code doesn't need to do anything special.

How to add more devices to AWS root account MFA

I already have Google Authenticator installed on my iPhone and I'm using it to sign in to my AWS root account. I want to add the ability to log in with MFA using my Android phone as well, using a corresponding token-generator Android app.
Is it possible to add a second device, and how exactly? Or is AWS root account MFA bound to one (virtual) device?
🚨 AWS finally provides support for adding additional MFA devices. 🚨
As of November 16, 2022:
https://aws.amazon.com/blogs/security/you-can-now-assign-multiple-mfa-devices-in-iam
I'm leaving the old answer below for reference, but it should no longer be needed.
You can only have one MFA device tied to your root account. You would need to set up a separate IAM user account for your separate device.
From the FAQ:
Q. Can I have multiple authentication devices active for my AWS account?
Yes. Each IAM user can have its own authentication device. However, each identity (IAM user or root account) can be associated with only one authentication device.
Update: So while it's not officially supported, here is someone who claims he was able to register Google Authenticator on two devices by enrolling both at the exact same time with the same QR code. Granted, he's not doing this with AWS, but it could be worth a try.
https://www.quora.com/Can-Google-Authenticator-be-used-on-multiple-devices
Update 2: I've started using Authy for MFA rather than Google Authenticator. One of the cool things Authy now supports is multi-device for all your MFA tokens. I currently have my phone and my tablet set up with access to my AWS account using Authy Multi Device.
http://blog.authy.com/multi-device
Here is the solution: when the AWS MFA page shows the barcode, scan the barcode from several devices (I've tried with 3) at the same time. They generate the same codes; fill in the form with those codes and it works.
This is not really a new answer, but it tries to clarify and explain a little better (or at least differently) why different virtual devices can be considered one virtual device.
At the moment (2020-05-07) you cannot have two different authentication devices for the same user (i.e. more than one of the following: a U2F USB key / a virtual device / a hardware device).
However, you can install the same virtual device application on multiple devices (mobile phones / tablets / PCs) if you initialize them all with the same initialization code (QR code).
The virtual MFA device is just an implementation of the TOTP algorithm (https://en.wikipedia.org/wiki/Time-based_One-time_Password_algorithm).
Each TOTP application has to be initialized with a 'secret' code (the QR code).
So if you scan the same QR code with different TOTP apps, all of these apps can authenticate (they will behave identically).
When initializing at AWS, you are asked to enter two consecutive codes generated by your TOTP app.
(Just enter them from any one of the apps that you initialized with the QR code. Or, if you are really crazy, create one code with one app and the next code with the other app; just make sure to enter the code that was generated first, first.)
Afterwards, all virtual devices will work and are completely interchangeable.
You could even 'archive' the QR code image in a safe place and add other virtual devices later (the QR code contains just the secret required to initialize the TOTP application). It does not expire.
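To make the TOTP point concrete, here is a self-contained C# sketch of the algorithm (RFC 6238 with the usual defaults: HMAC-SHA1, 30-second steps, 6 digits). Any two "devices" initialized with the same secret and roughly synchronized clocks will print the same code (the secret below is a placeholder):

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

class TotpDemo
{
    // Decode the base32-encoded secret that the QR code carries.
    static byte[] Base32Decode(string s)
    {
        const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
        int bits = 0, value = 0;
        var bytes = new List<byte>();
        foreach (char c in s.TrimEnd('=').ToUpperInvariant())
        {
            value = (value << 5) | alphabet.IndexOf(c);
            bits += 5;
            if (bits >= 8)
            {
                bytes.Add((byte)((value >> (bits - 8)) & 0xff));
                bits -= 8;
            }
        }
        return bytes.ToArray();
    }

    // The code depends only on the shared secret and the current 30-second window.
    static string Totp(byte[] secret, DateTimeOffset now)
    {
        long counter = now.ToUnixTimeSeconds() / 30;
        byte[] counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian) Array.Reverse(counterBytes); // big-endian counter
        using var hmac = new HMACSHA1(secret);
        byte[] hash = hmac.ComputeHash(counterBytes);
        int offset = hash[hash.Length - 1] & 0x0f; // RFC 4226 dynamic truncation
        int binary = ((hash[offset] & 0x7f) << 24)
                   | (hash[offset + 1] << 16)
                   | (hash[offset + 2] << 8)
                   | hash[offset + 3];
        return (binary % 1_000_000).ToString("D6");
    }

    static void Main()
    {
        byte[] secret = Base32Decode("JBSWY3DPEHPK3PXP"); // placeholder secret
        // Two "devices" sharing the secret produce identical codes.
        Console.WriteLine(Totp(secret, DateTimeOffset.UtcNow));
        Console.WriteLine(Totp(secret, DateTimeOffset.UtcNow));
    }
}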
From AWS Organizations documentation:
If you choose to use a virtual MFA application, then unlike our recommendation for the management account root user, for member accounts you can re-use a single MFA device for multiple member accounts. You can address geographic limitations by printing and securely storing the QR code used to configure the account in the virtual MFA application. Document the QR code's purpose, and seal and store it in accessible safes across the time zones you operate in, according to your information security policy. Then, when access is needed in a different geographic location, the local copy of the QR code can be retrieved and used to configure a virtual MFA app in the new location.
I actually tried using the same secret configuration key from AWS on an iPhone, an iPad, and an Android phone using Google Authenticator, and they all worked fine. This is the same as what @Jaap did.
In addition to the solutions above:
1) You cannot make a QR code reappear after attaching an MFA device to an AWS account. So if you need to add another virtual MFA device, delete the existing device, reattach it, and take a screenshot of the QR code (or save the secret code), then scan that QR code with the other device.
2) The QR code does not expire. I could still use mine weeks after initialization.
You can export your accounts from Google Authenticator to another device without losing access to them from your current device.
I discovered this when I was upgrading my mobile device and found that my new device would show the exact same MFA codes as my current device at the same time.
On your current MFA device, open Google Authenticator and tap "..." in the upper-right corner
In the menu, select "Export accounts", then tap "Continue"
You will see a list of accounts, so select the ones you want to enable on the new device and then tap "Export"
You will be shown a QR code, which you then scan from the new device

Mercurial-server add permissions to my repository

I'm using mercurial-server to manage my repositories on our enterprise server. I created a repository for each user, and I wanted each of them to be able to grant access to others, i.e., each user would control access to the projects in their own repository. But in the mercurial-server documentation I see that only administrators can grant that kind of access.
Is that how it works, or is there a way to get around it, either through mercurial-server or through Mercurial (hg) itself?
If you want to delegate access rights management to your users, they would need to have access to the /hgadmin repository and be able to modify the /hgadmin/access.conf file, where the fine-grained access control is located.
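For reference, the rules in access.conf look roughly like this (an untested sketch; the exact rule and condition syntax may differ between mercurial-server versions, so check the documentation shipped with yours):

init   repo=hgadmin          user=hg-admin-keys/*
write  repo=projects/alice/* user=hg-keys/alice/*
read   repo=projects/alice/* user=hg-keys/bob/*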
To my knowledge there is no way (yet) to use mercurial-server for silos of access control, where a user could grant access to his/her own repository but not to others' repositories. However, you should be able to build such an extension to the system: a hook could extract the relevant rights from, e.g., <user-repo>/admin/access.conf and copy them to a staging area; another hook or a cron job would then select only the lines concerning the <user-repo> zone (with sed, perl, or whatever you'd like), update the real access.conf file, and finally commit and push it.
Hope it helps.