How do I pass AWS credentials using a process variable containing an AWS service endpoint id

I want to use two variables, $(Aws.Endpoint) and $(Aws.Region), in my AWS-related release tasks, and provide values for those as process variables.
Aws.Endpoint is the ID of an AWS service endpoint in VSTS. When I do this, I get:
Endpoint auth data not present: ...
Has anyone who ran into this seemingly trivial issue found a solution? Otherwise I need to define the AWS endpoint directly in the task, which feels wrong, because I eventually want the release tasks to be part of a task group shared by all the environments making up the pipeline (dev, stage, prod).
Note: I see there is no Stack Overflow tag for AWS Tools for Visual Studio Team Services, and I don't have the reputation to create a new tag. If someone with enough reputation could create something like aws-tools-for-vsts (homepage), that would be grand.

No, you can't do this with the tasks in the AWS Tools for Microsoft Visual Studio Team Services extension. You can create a custom build/release task through a VSTS extension to meet your requirement.
If you want to get the AWS endpoint in your custom build/release task, you can retrieve it through Get-VstsEndpoint (PowerShell) or task.getEndpointAuthorization (Node.js) with the GUID of the service endpoint, which you can find in the build/release log: ##[debug]awsCredentials=9a3009d2-35f3-4954-a8fa-34c3313c34f6
For example:
# Look up the endpoint by the GUID found in the build/release log
$awsEndpoint = Get-VstsEndpoint -Name "<GUID of the service endpoint>" -Require
Write-Host $awsEndpoint
# Auth.Parameters holds the credential values stored on the endpoint
foreach ($p in $awsEndpoint.Auth.Parameters.PSObject.Properties) {
    Write-Host "$($p.Name) = $($p.Value)"
}


How to achieve multiple GCS backends in Terraform

Within our team, we all have our own dev project, and then we have a test and prod environment.
We are currently in the process of migrating from Deployment Manager and the gcloud CLI to Terraform. However, we haven't been able to figure out a way to create isolated backends within the GCS backend. We have noticed that the remote backends support setting a dedicated workspace, but we haven't been able to set up something similar with GCS.
Is it possible to state that Terraform resource A will have a configurable backend that we can adjust per project, or is the equivalent possible with workspaces? So that we can use either tfvars and vars parameters to switch between projects?
As it stands, every time we attempt to make the backend configurable through vars, terraform init fails with:
Error: Variables not allowed
How does one go about creating isolated backends for each project? Or, if that isn't possible, how can we guarantee that with multiple projects a shared backend state will not collide, causing the state to be incorrect?
Your backend, meaning your backend bucket, must be known when you run your terraform init command.
If you don't want to use workspaces, you have to customize the backend value before running the init. We use make to achieve this: depending on the environment, make creates a backend.tf file with the correct backend name and then runs the init command.
EDIT 1
We have this piece of shell script, which creates the backend file before triggering the terraform command (it's our Makefile that runs it):
cat > $TF_export_dir/backend.tf << EOF
terraform {
  backend "gcs" {
    bucket = "$TF_subsidiary-$TF_environment-$TF_deployed_application_code-gcs-tfstatebackend"
    prefix = "terraform/state"
  }
}
EOF
Of course, the bucket name pattern depends on our project. $TF_environment is the most important variable: depending on which environment is set, a different bucket is reached.
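An alternative to generating backend.tf from make is Terraform's partial backend configuration: keep an empty backend block (terraform { backend "gcs" {} }) in the code and supply the environment-specific values at init time. A minimal sketch, assuming a hypothetical bucket naming scheme of my-app-<env>-tfstate:
# ENV is chosen per project/environment (e.g. dev, test, prod)
terraform init \
  -backend-config="bucket=my-app-${ENV}-tfstate" \
  -backend-config="prefix=terraform/state"
This keeps a single shared configuration while giving each project or environment its own isolated state bucket, and it avoids the "Variables not allowed" error because no variables appear inside the backend block.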

AWS Amplify environment 'dev' not found

I'm working with AWS Amplify, specifically following this tutorial AWS-Hands-On-Tutorial.
I'm getting a build failure when I try to deploy the application.
So far I have tried creating multiple backend environments and connecting them with the frontend, hoping that this would alleviate the issue. The error message leads me to believe that the deploy is not set up to also detect the backend environment, despite my having it set to do so.
Also, I have tried changing the environment that is set to deploy with the frontend by creating another develop branch to see if that is the issue.
I've had no success with trying any of these, the build continues to fail. I have also tried running the 'amplify env add' command as the error message states. I have not however tried "restoring its definition in your team-provider-info.json" as I'm not sure what that entails and can't find any information on it. Regardless, I would think creating a new environment would solve the potential issues there, and it didn't. Any help is appreciated.
Due to the documentation being out of date, I completed the steps below to resolve this issue:
Under Build Settings, add a package version override for the Amplify CLI and leave it as 'latest'.
Where the tutorial advises to "update your front end branch to point to the backend environment you just created. Under the branch name, choose Edit...", the tutorial says to use 'dev', but the setup actually created 'staging'; choose that instead.
Lastly, we need to set up a Service Role under General. Select General > Edit > Create New Service Role > select the default options and save the role; it should have a name of amplifyconsole-backend-role. Once the role is saved, go back to General > Edit and select your role from the dropdown; if it doesn't show by default, start typing it in.
After completing these steps, I was able to successfully redeploy my build and get it pushed to prod with authentication working. Hope it helps anyone who is running into this issue on Module 3 of the AWS Amplify Starter Tutorial!
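If the console still can't find the environment after these changes, it can also help to confirm from the Amplify CLI which backend environments actually exist; a quick sketch, assuming it is run from the project root with the CLI configured:
# Show the backend environments Amplify knows about for this app
amplify env list
# If the expected environment (e.g. 'dev' or 'staging') is missing, create it
amplify env add
# Push the backend resources for the selected environment
amplify push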

Don't want to log in to Google Cloud with a service account

I am new to Google Cloud and this is my first experience with this platform (before, I was using Azure).
I am working on a C# project, and the project has a requirement to save images online; for that, I created Cloud Storage.
Now, for using the services, I found out that I have to download a service account credential file and set the path of that file in an environment variable.
That is good and working fine:
RxStorageClient = StorageClient.Create();
But the problem is that my whole project is a collection of 27 different projects that are all in the same solution, there are multiple Cloud Storage accounts involved, and I also want to use them with Docker.
So I was wondering: is there any alternative to this service account system, like an API key or connection string like Azure provides?
Because I saw this initialization function has some other options to authenticate, but I didn't see any example:
RxStorageClient = StorageClient.Create();
Can anyone please provide a proper example of connecting to Cloud Storage services without this service account file system?
You can do this instead of relying on the environment variable by downloading credential files for each project you need to access.
So for example, if you have three projects that you want to access storage on, then you'd need code paths that initialize the StorageClient with the appropriate service account key from each of those projects.
StorageClient.Create() can take an optional GoogleCredential object to authorize it (if you don't specify one, it grabs the default application credentials, which can be set, among other ways, via the GOOGLE_APPLICATION_CREDENTIALS env var).
So on GoogleCredential, check out the FromFile(String) static call, where the String is the path to the service account JSON file.
There are no examples. Service accounts are absolutely required, even if hidden from view, to deal with Google Cloud products. They're part of the IAM system for authenticating and authorizing various pieces of software for use with various products. I strongly suggest that you become familiar with the mechanisms of providing a service account to a given program. For code running outside of Google Cloud compute and serverless products, the current preferred solution involves using environment variables to point to files that contain credentials. For code running on Google Cloud (like Cloud Run, Compute Engine, Cloud Functions), it's possible to provide service accounts by configuration so that the code doesn't need to do anything special.
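For the Docker part of the question, the environment-variable approach maps naturally onto containers: mount each project's key file into its container and point GOOGLE_APPLICATION_CREDENTIALS at it, so StorageClient.Create() picks up the right credentials without code changes. A rough sketch with hypothetical image and file names:
# Container for project A, using project A's service account key
docker run \
  -v /secrets/project-a-key.json:/secrets/key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json \
  myapp:latest
# Container for project B, same image, different key file
docker run \
  -v /secrets/project-b-key.json:/secrets/key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json \
  myapp:latest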

GCP Deployment Manager - What Dev Ops Tool To Use In Conjunction?

I'm presently looking into GCP's Deployment Manager to deploy new projects, VMs and Cloud Storage buckets.
We need a web front end that authenticated users can connect to in order to deploy the required infrastructure, though I'm not sure what Dev Ops tools are recommended to work with this system. We have an instance of Jenkins and Octopus Deploy, though I see on Google's Configuration Management page (https://cloud.google.com/solutions/configuration-management) they suggest other tools like Ansible, Chef, Puppet and Saltstack.
I'm supposing that through one of these I can update something simple like a name variable in the config.yaml file and deploy a project.
Could I also ensure a chosen name for a project, VM or Cloud Storage bucket fits with a specific naming convention with one of these systems?
Which system do others use and why?
I use Deployment Manager, as all 3rd party tools are reliant upon the presence of GCP APIs, as well as trusting that those APIs are in line with the actual functionality of the underlying GCP tech.
GCP is decidedly behind the curve on API development, which means that even if you wanted to use TF or whatever, at some point you're going to be stuck inside the SDK, anyway. So that's why I went with Deployment Manager, as much as I wanted to have my whole infra/app deployment use other tools that I was more comfortable with.
To specifically answer your question about validating naming schema, what you would probably want to do is write a wrapper script that uses the gcloud deployment-manager subcommand. Do your validation in the wrapper script, then run the gcloud deployment-manager stuff.
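A rough sketch of such a wrapper, assuming a hypothetical naming convention of team-env-purpose (adjust the regex and config path to your own standards):
#!/bin/bash
# Validate the deployment name, then hand off to gcloud deployment-manager
set -euo pipefail
NAME="$1"
CONFIG="${2:-config.yaml}"
# Enforce the naming convention before anything gets created
if ! [[ "$NAME" =~ ^[a-z]+-(dev|test|prod)-[a-z0-9]+$ ]]; then
  echo "Deployment name '$NAME' does not match the naming convention" >&2
  exit 1
fi
gcloud deployment-manager deployments create "$NAME" --config "$CONFIG"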
Word of warning about Deployment Manager: it makes troubleshooting very difficult. Very often it will obscure the error that can help you actually establish the root cause of a problem. I can't tell you how many times somebody in my office has shouted "UGGH! Shut UP with your Error 400!" I hope that Google takes note from my pointed survey feedback and refactors DM to pass the original error through.
Anyway, hope this helps. GCP has come a long way, but they've still got work to do.

Bind a service to two different app spaces in cloudfoundry

Is it possible to bind a service (i.e. MariaDB) to apps in different spaces? How can I achieve it if I want to use the same database for two different spaces?
Currently we don't support service instance sharing. We have already made the necessary code changes and tested them (Service Broker), but haven't rolled the feature out on prd because it is currently in beta.
Sharing a service instance between spaces allows apps in different spaces to share databases, messaging queues, and other types of services. This eliminates the need for development teams to use service keys and user-provided services to bind their apps to the same service instance that was provisioned using the cf create-service command. Sharing service instances improves security, auditing, and provides a more intuitive user experience.
See this discussion for more info when this feature will be generally available from upstream.
I tried the solution from https://docs-cloudfoundry-staging.cfapps.io/devguide/services/sharing-instances.html.
If I run the first command I get the following error:
$ cf enable-feature-flag service_instance_sharing
Server error, status code: 403, error code: 10003, message: You are not authorized to perform the requested action
The second command works and now I can see the service in the space B on the dashboard.
$ cf share-service SERVICE-INSTANCE -s OTHER-SPACE [-o OTHER-ORG]
Note: if I click on the service on the dashboard it says: This is a shared service. It is accessible only in the space it was shared from. The service is also shown greyed out.
You can share the same service instance in two different spaces/orgs. Follow:
1) https://docs.pivotal.io/pivotalcf/2-3/services/enable-sharing.html
2) https://docs.pivotal.io/pivotalcf/2-3/devguide/services/sharing-instances.html
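Putting the two documented steps together, the end-to-end flow looks roughly like this; the instance name my-mariadb, app name my-app, and org/space names are hypothetical, and the feature flag must be enabled by a platform admin:
# As an admin, enable service instance sharing on the platform
cf enable-feature-flag service_instance_sharing
# From the org/space that owns the instance, share it into the other space
cf share-service my-mariadb -s other-space -o other-org
# Switch to the other space and bind the shared instance as usual
cf target -o other-org -s other-space
cf bind-service my-app my-mariadb
cf restage my-app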