Q: Are "domain limited" Desire2Learn API keys 100% locked to the D2L domain they were issued for, or can they be used in a pinch for work on a different domain -- say, several weeks of testing an upgrade?
Details specific to our case:
My institution is preparing to upgrade our D2L Learning Environment. We have one Production LE and one Dev LE, and we're expecting to get a 2nd Dev LE specifically for upgrade testing (all 3 instances hosted by D2L, fyi).
We have 2 homegrown Valence client apps to test with the upgraded LE. I know that our Valence API keys were issued specifically for our existing (not upgraded) Dev domain. I also know our client app is hard-coded with that key.
But it's not clear to me whether we have to get a new API key and edit our client app accordingly, or whether we can use the existing key on a "wrong" domain for just a few weeks while we're testing the upgrade.
Could such an arrangement be used temporarily?
There are several possible approaches; the one you choose will depend upon your circumstances.
Use another test application's key already granted for the new domain. If you already have an App ID/Key granted for an application limited to your new DEV2 LE, then you can try using that application's credentials temporarily. This would require rebuilding, or reconfiguring, your client application with the new credentials. We do not recommend this approach, because for effective testing you want clear traceability as to which application is making which calls to the LE; however, if you already have a set of credentials for a narrowly deployed test application, you can in a pinch switch to sharing those credentials.
Use the LMSID/Key credentials from DEV1 LE on DEV2 LE. The "domain limitation" applied to app keys corresponds to the LMSID/Key credentials assigned to an LE instance at deployment. If your DEV2 instance is only being floated to test integrations in an upgrade scenario, and these integrations are already (in their test form) all working against your DEV1 instance, then it may be possible to have your DEV2 LE use the same LMSID/Key credentials as your DEV1 LE. This would mean that when the DEV2 LE fetches its known-application credential list from D2L's Key Tool Service, it gets exactly the same list of credentials as the DEV1 LE. This is the most radical suggestion: it will require D2L's Support Desk to get involved, and will most definitely require shepherding by your DEV2 LE's Approved Support Contact. This kind of deployment can make sense for certain very specific kinds of testing LMS instances, but it is a very big hammer to apply, so it may not be the right choice here.
Note that this solution is the only one that will work if you have no access to change the application's code/configuration itself (and the app credentials are baked into the app). If the app you want to test must work against an LE that acts as if it were the DEV1 instance, then this may be the only solution possible, and in this case you may have to wait until the upgraded LE gets deployed on DEV1 to test your application. I am not at all confident that a granted set of app credentials can be "repointed" to a new domain limitation.
Apply for a new application ID/Key pair, and work to expedite the request. The chief latency in granting application ID/Keys and deploying them lies in having the partner and/or account managers for the target LMS Domain approve the request: if you brief your partner and/or account manager on the situation and ask them to shepherd the request, that latency can be lowered. This is the preferred choice, because it uses the "proper channels" of the existing business relationship in the way they were intended to be used.
Getting a new set of application credentials for a test app in your new DEV2 domain should not take very long, especially if you already have a working relationship with a partner and/or account manager through which app credentials have been granted before. This solution still requires you to change/re-configure your app.
If at all possible, you should take this last path.
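Whichever route you take, both the first and last options require the client application to pick up a different App ID/Key. Since your apps currently hard-code the key, a small refactor to read the credentials from configuration turns that swap into a deploy-time change rather than a code change. Below is a minimal Python sketch of that idea; it is not part of the Valence SDK, and the variable names (D2L_APP_ID, D2L_APP_KEY) and the fallback file are hypothetical.

    # Minimal sketch: load Valence app credentials from the environment (or a
    # local config file kept out of source control) instead of hard-coding them,
    # so pointing the app at credentials granted for the DEV2 domain is just a
    # configuration change. Variable and file names here are illustrative only.
    import json
    import os


    def load_valence_credentials(config_path="valence.json"):
        """Return (app_id, app_key), preferring environment variables."""
        app_id = os.environ.get("D2L_APP_ID")
        app_key = os.environ.get("D2L_APP_KEY")
        if app_id and app_key:
            return app_id, app_key
        # Fall back to a local JSON file, e.g. {"app_id": "...", "app_key": "..."}
        with open(config_path) as fh:
            cfg = json.load(fh)
        return cfg["app_id"], cfg["app_key"]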
Related
My development team has been sparingly trying out Google Cloud Platform for about 10 months. Every member was using the same account to access GCP, say team@example.com. We created three projects under this account.
Since about July, we can no longer see these projects in the GCP console. Instead, there is one project named My First Project, which we never created.
However, our original GCP projects still seem to exist; for example, we can still access some of the Google Cloud Functions via HTTP.
Therefore, I have the impression that either the connection between our account and the projects has been lost, or a second account with the same name has been accidentally created.
Additional curiosities:
Yesterday I tried to create a Google Cloud Identity account using team@example.com. It did not work; when I entered that address, the input field showed an error like "Please use another email address. This is a private Google account." (The message was actually in German, so this is my translation.)
When I go to accounts.google.com, the account selection screen offers team@example.com twice. No matter which entry I choose, I always end up in the GCP console with My First Project.
How can I recover my team's GCP projects?
Which Google support site may I consult to check on the account(s)?
Usually, there is a 1:1 mapping between a given email address and a Google Account. However, this mapping can be broken in certain situations, for example when creating, deleting, or migrating G Suite or Cloud Identity accounts under the domain the email address uses.
If you hit such an edge case, there's not much you can do yourself. Reach out to GCP Support who should be able to resolve the issue for you.
Keep in mind that orphaned resources have a timer on them before they are deleted, so act quickly, and do not take the fact that your apps still respond as a sign that they will keep working indefinitely.
Each time a Docker image containing a .NET Core MVC web application starts up, all authentication cookies are invalidated, presumably due to a fresh machine key (which is used when signing the cookies) being generated.
This could traditionally be set via the <machineKey/> element in the web.config of a .NET app.
This link suggests that the DataProtection package would fit the bill, but the package seems to require the full fat framework.
What would be the correct way to ensure that every time a Docker image restarts it doesn't invalidate existing auth cookies?
You want to put the keys for data protection into a persistent and shareable location.
If you're on AWS, AspNetCore.DataProtection.Aws lets you put the keyring on S3 with just a few lines of configuration code. Additionally, you can leverage AWS KMS to encrypt the keys, which is especially useful for getting consistent encryption algorithms, allowing you to reuse the same key across different operating systems that have different default encryption algorithms. The KMS option is also part of the same library.
If you're on a platform other than AWS, you'll need a different library, or you can mount a shared drive; either way, the concept of sharing the same location for the keys remains the same.
When creating droplets on DigitalOcean using Terraform, the created machines' passwords are sent via mail. If I understand the documentation for the DigitalOcean provider correctly, you can also specify the IDs of SSH keys to use.
If I am bootstrapping a data center using Terraform, which option should I choose?
Somehow, it feels wrong to have a different password for every machine (somehow using passwords per se feels wrong), but it also feels wrong if every machine is linked to the SSH key of my user.
How do you do that? Is there a way that can be considered good (best?) practice here? Should I create an SSH key pair only for this and commit it with the Terraform files to Git as well? …?
As you mentioned, using passwords on instances is an absolute pain once you have an appreciable number of them. It's also less secure than SSH keys that are properly managed (kept secret). Obviously you are going to have trouble linking the rest of your automation to credentials that are delivered out of band to your tooling, so if you need to actually configure these servers to do anything, the password-by-email option is pretty much out.
I tend to use a different SSH key for each application and development stage (e.g. dev, testing/staging, production), but then everything inside that combination gets the same public key for ease of management. Separating things this way means that if one key is compromised you don't need to replace the public key everywhere, which minimises the blast radius of the event. It also means you can rotate keys independently, especially as some environments may move faster than others.
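To make that per-application, per-environment layout concrete, here is a rough Python sketch that shells out to ssh-keygen to create one key pair per stage. The application name, environment names, and output directory are made up for illustration, and the keys are generated without a passphrase only to keep the example non-interactive; in practice, protect and distribute them as described in the warning below.

    # Sketch: one SSH key pair per app/environment, generated with ssh-keygen.
    # Names and paths below are hypothetical; keep the private keys out of Git.
    import pathlib
    import subprocess

    APP = "myapp"                      # hypothetical application name
    ENVIRONMENTS = ["dev", "staging", "production"]
    KEY_DIR = pathlib.Path("keys")     # this directory must not be committed

    KEY_DIR.mkdir(exist_ok=True)
    for env in ENVIRONMENTS:
        key_file = KEY_DIR / f"{APP}-{env}"
        if key_file.exists():
            continue  # don't overwrite an existing key pair
        subprocess.run(
            ["ssh-keygen", "-t", "ed25519",
             "-f", str(key_file),
             "-C", f"{APP}-{env}",     # comment identifying app/stage
             "-N", ""],                # empty passphrase, for illustration only
            check=True,
        )
        # The matching .pub file is what you register with DigitalOcean and
        # reference via the provider's ssh_keys argument; the private half
        # stays out of the repository.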
As a final word of warning, do not put your private SSH keys into the same Git repo as the rest of your code, and definitely do not publish a private SSH key to a public repo. You will probably want to look into a secrets management tool such as HashiCorp's Vault if you are in a large team, or at least distribute these shared private keys out of band if they need to be used by multiple people.
I'm using Amazon Directory Services with a Simple AD instance. I can join computers to the domain, but I can't figure out how to add users to the domain (and do not see in the documentation whether this is even possible).
How do I create a user in Amazon Simple AD?
You can manage users (and groups) via a bound instance's Active Directory Users and Computers tool. Details are here.
Note that, due to a bug, this must be done from a Windows Server 2008 R2 instance at the time of writing; Windows Server 2012 is not supported, per this post (registration required).
I already have Google Authenticator installed on my iPhone and I'm using it to sign in to my AWS root account. I want to add the ability to log in with MFA using my Android phone as well, using a corresponding token-generator Android app.
Is it possible to add a second device, and how exactly? Or is AWS root account MFA bound to one (virtual) device?
🚨 AWS finally provides support for adding additional MFA devices. 🚨
As of November 16, 2022:
https://aws.amazon.com/blogs/security/you-can-now-assign-multiple-mfa-devices-in-iam
I'm leaving the old answer below for reference, but it should no longer be needed.
You can only have one MFA device tied to your root account. You would need to setup a separate IAM user account for your separate device.
From the FAQ:
Q. Can I have multiple authentication devices active for my AWS account?
Yes. Each IAM user can have its own authentication device. However, each identity (IAM user or root account) can be associated with only one authentication device.
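If you go the route described above (a separate IAM user for the second device), the setup can also be scripted. Here is a rough sketch using boto3, the AWS SDK for Python; the user name, device name, and the two authentication codes are placeholders, and you would still attach whatever IAM policies the user needs.

    # Rough boto3 sketch: give the second device its own IAM user and its own
    # virtual MFA device instead of sharing the root account's device.
    import boto3

    iam = boto3.client("iam")

    # 1) A dedicated IAM user for the second device (attach policies as needed).
    iam.create_user(UserName="android-admin")           # placeholder name

    # 2) A virtual MFA device; the Base32StringSeed (or QRCodePNG) in the
    #    response is the secret you load into the authenticator app.
    device = iam.create_virtual_mfa_device(
        VirtualMFADeviceName="android-admin-mfa"         # placeholder name
    )["VirtualMFADevice"]
    print("Seed:", device["Base32StringSeed"].decode())

    # 3) Bind the device to the user by supplying two consecutive codes
    #    generated by the app from that seed.
    iam.enable_mfa_device(
        UserName="android-admin",
        SerialNumber=device["SerialNumber"],
        AuthenticationCode1="123456",   # first code shown by the app
        AuthenticationCode2="654321",   # the next code
    )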
Update: So while it's not officially supported, here is one guy who claims he was able to register Google Authenticator on two devices by doing both at the exact same time with the same QR code. Granted he's not doing this with AWS, but it could be worth a try.
https://www.quora.com/Can-Google-Authenticator-be-used-on-multiple-devices
Update 2: I've started using Authy for MFA rather than Google Authenticator. One of the cool things Authy now supports is multiple devices for all your MFA tokens. I currently have both my phone and my tablet set up with access to my AWS account using Authy Multi Device.
http://blog.authy.com/multi-device
Here is the solution: when the AWS MFA page shows the barcode, scan it from multiple devices (I've tried with 3) at the same time. They generate the same codes; fill in the form with those codes and it works.
This is not really a new answer, but it tries to clarify and explain a little better (or at least differently) why different virtual devices can be considered to be one virtual device.
At the moment (2020-05-07) you cannot have two different authentication devices for the same user (i.e. more than one of the following: a U2F USB key, a virtual device, a hardware device).
However, you can install the same virtual device application on multiple devices (mobile phones / tablets / PCs) if you initialize them all with the same initialization code (the QR code).
A virtual MFA device is just an implementation of the TOTP algorithm (https://en.wikipedia.org/wiki/Time-based_One-time_Password_algorithm). Each TOTP application has to be initialized with a 'secret' code (the QR code). So if you scan the same QR code with different TOTP apps, all of these apps can authenticate, because they behave identically; a short code sketch at the end of this answer demonstrates this.
When initializing at AWS, you are asked to enter two consecutive codes generated by your TOTP app. Just enter them from any one of the apps that you initialized with the QR code. (Or, if you are really crazy, create one code with one app and the next code with the other app; just make sure you enter the code that was generated first in the first field.)
Afterwards, all virtual devices will work and are completely interchangeable.
You could even 'archive' the QR code image in a safe place and add other virtual devices later (the QR code contains just the secret required to initialize the TOTP application). It does not expire.
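To make the "same secret, same codes" point above concrete, here is a small standard-library Python sketch of TOTP (RFC 6238, the algorithm linked earlier). The base32 secret is a made-up example, not a real AWS seed; two "devices" seeded with it produce identical codes for the same time step.

    # TOTP is just HMAC-SHA1 over the current 30-second time step, keyed with
    # the shared secret from the QR code, so every app initialized with the
    # same secret produces the same 6-digit code.
    import base64
    import hmac
    import struct
    import time


    def totp(base32_secret, digits=6, step=30, t=None):
        """RFC 6238 TOTP using HMAC-SHA1, the variant virtual MFA apps use."""
        key = base64.b32decode(base32_secret.upper())
        counter = int((time.time() if t is None else t) // step)
        digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)


    shared_secret = "JBSWY3DPEHPK3PXP"   # example secret (what the QR code encodes)
    now = time.time()
    print("device A:", totp(shared_secret, t=now))
    print("device B:", totp(shared_secret, t=now))   # same secret -> same code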
From AWS Organizations documentation:
If you choose to use a virtual MFA application, then unlike our recommendation for the management account root user, for member accounts you can re-use a single MFA device for multiple member accounts. You can address geographic limitations by printing and securely storing the QR code used to configure the account in the virtual MFA application. Document the QR code's purpose, and seal and store it in accessible safes across the time zones you operate in, according to your information security policy. Then, when access is needed in a different geographic location, the local copy of the QR code can be retrieved and used to configure a virtual MFA app in the new location.
I actually tried using the same secret configuration key from AWS on an iPhone, an iPad, and an Android device with Google Authenticator, and they all worked fine. This is the same as what @Jaap did.
In addition to the solutions above:
1) You cannot make the QR code reappear after attaching an MFA device to an AWS account. So if you need to add another virtual MFA device, delete the existing device, reattach it, take a screenshot of the QR code (or save the secret code), and then scan that QR code with the other device.
2) The QR code does not expire. I could still use mine weeks after initialization.
You can export your accounts from Google Authenticator to another device without losing access to them from your current device.
I discovered this when I was upgrading my mobile device and found that my new device would show the exact same MFA codes as my current device at the same time.
1) On your current MFA device, open Google Authenticator and tap "..." in the upper right corner.
2) In the menu, select "Export accounts", then tap "Continue".
3) You will see a list of accounts; select the ones you want to enable on the new device and then tap "Export".
4) You will be shown a QR code, which you then scan from the new device.