Is there a way to let CloudFormer (beta) keep user data in a launch configuration?

I tried out CloudFormer (beta), the AWS tool that generates a CloudFormation template from selected existing infrastructure. CloudFormer runs as a separate stack that creates an instance: you create the stack from the CloudFormer template and then log in to that instance using the credentials you filled in when creating the stack, as described in https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html.
This all worked smoothly. I got into the CloudFormer wizard and was able to easily click through and select all my resources: a VPC with an autoscaling group that runs a simple web app connected to S3, RDS and DynamoDB. The exercise is based on Ryan Lewis' excellent Pluralsight course for AWS (source code). When I ran the resulting CloudFormation template, I ran into just one issue: I had to change a single occurrence of AWS::RDS::DBSecurityGroup to AWS::EC2::SecurityGroup, because the former doesn't seem to be accepted. After that my stack was created successfully.
However, the app was not running. A quick inspection showed that the user data was missing from the launch configuration, so it seems CloudFormer simply skips it when generating the launch config template. That's odd to me, since the user data is what makes launch configurations useful. Did you experience the same issue, and is there perhaps a workaround?
For completeness' sake, here is the relevant part of the generated CloudFormation template:
"lcpizzalauncherdyn4": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"AssociatePublicIpAddress": true,
"ImageId": "ami-0661a53fb3b1e117a",
"InstanceType": "t2.micro",
"KeyName": "pizza-keys",
"IamInstanceProfile": "pizza-ec2-role",
"SecurityGroups": [
{
"Ref": "sgpizzaec2sg"
}
],
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
"SnapshotId": "snap-0cad60faa1e33e22b",
"VolumeSize": 8
}
}
]
}
},
See the full file on GitHub.
It seems a shame that CloudFormer can't produce a fully working template because of a small omission like this. I do understand it's a beta, so I tried to find a place to report the issue. Do you know the preferred way to let the AWS team know?

CloudFormer has been in beta since 2011. It does not appear to have been maintained much lately, so it might be deprecated in the future.
So, it looks like you'll need to add the User Data section manually.
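For example, you can recover the script from the original launch configuration (the EC2 console shows it, and aws autoscaling describe-launch-configurations returns it base64-encoded) and add a UserData property back alongside the ones CloudFormer did generate. A minimal sketch; the script lines below are placeholders for whatever actually bootstraps your app:
"UserData": {
  "Fn::Base64": {
    "Fn::Join": ["\n", [
      "#!/bin/bash",
      "# placeholder: the bootstrap commands your instances originally ran",
      "yum update -y"
    ]]
  }
}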

Related

How to use GCP Run services' built-in continuous deployment with terraform?

I originally deployed several GCP Run services using a target image via terraform:
resource "google_cloud_run_service" "my-run-service" {
name = "my-run-service"
location = "us-central1"
template {
spec {
containers {
image = "us.gcr.io/project-name/image-name"
...
}
...
}
}
Later on, I switched to using the integrated continuous deployment option from GCP Run, which worked seamlessly.
I assumed updating my terraform file would be straightforward by comparing the required changes on the next apply, but when doing so, I can't see anything that refers to the CD setup.
Does google_cloud_run_service support this? I can't seem to find anything in the docs.
If not, what's the alternative? So far I've had to stop managing those resources from terraform, which is really not ideal.
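As far as I know the google provider exposes no setting for the built-in CD integration, but one commonly suggested workaround (a sketch under that assumption, not documented CD support) is to keep the service in terraform and ignore the image field that the CD pipeline rewrites on each deploy:
resource "google_cloud_run_service" "my-run-service" {
  name     = "my-run-service"
  location = "us-central1"

  template {
    spec {
      containers {
        # initial image only; the CD integration replaces it on each deploy
        image = "us.gcr.io/project-name/image-name"
      }
    }
  }

  lifecycle {
    # let the CD pipeline own the deployed image so terraform
    # doesn't try to revert it on the next apply
    ignore_changes = [template[0].spec[0].containers[0].image]
  }
}
This keeps everything else about the service under terraform management while leaving the image to the CD pipeline.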

Google Cloud Function deployment failure

I am new to Google Cloud Platform, and Google Cloud Functions is showing really strange behavior. I'm trying to run the following code:
exports.helloPubSub = (event, context) => {
  const pubsubMessage = event.data;
  console.log(event.data.attributes);
  console.log(Buffer.from(pubsubMessage, 'base64').toString());
};
But when I create the function, the deployment fails, and only after 8-9 attempts does it get deployed. The error it throws is:
Deployment failure:
Build failed: {"cacheStats": [{"status": "MISS", "hash": "0bb4aa23414dd82b8643cb2c86b5a55af031b22701fbe364a88ea6e61ad481a4", "type": "docker_layer_cache", "level": "global"}, {"status": "HIT", "hash": "0bb4aa23414dd82b8643cb2c86b5a55af031b22701fbe364a88ea6e61ad481a4", "type": "docker_layer_cache", "level": "project"}]}
Is it a bug in Google Cloud Platform or am I doing something wrong? If I am doing something wrong, then I shouldn't be able to deploy it even once.
Any help would really be appreciated. Thanks a lot!
If we look at the Google Cloud Status Dashboard, we find that there is an outage affecting Cloud Functions deployment at this time (the time of your post). I can't see anything wrong with what you have coded and strongly believe that you are being impacted by the outage.
References
Google Cloud Status Dashboard
Cloud Function outage

Bitbucket to source repo mirroring using terraform

I am trying to automate project and resource creation, along with the triggers for Cloud Build, using terraform. To use Cloud Build triggers I will have to mirror the Bitbucket repo into a source repo on GCP.
I am using the below to create a source project:
https://www.terraform.io/docs/providers/google/r/cloudbuild_trigger.html, but there is no option to set a mirror.
Digging into the GCP APIs (https://cloud.google.com/source-repositories/docs/reference/rest/v1/projects.repos/create), I can see a mirrorConfig option, but the docs say it is read-only. When I set mirrorConfig in the API call, I get the error below.
{
  "error": {
    "code": 400,
    "message": "Request contains an invalid argument.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "repo.mirror_config",
            "description": "mirror_config is a read-only field and must not be set"
          }
        ]
      }
    ]
  }
}
Is there a way to automate repo mirroring from Bitbucket to a source repository in GCP using terraform? If not, is there an alternative way or tool for achieving this?
As you mentioned (and as stated in the documentation), the “mirrorConfig” field is currently read-only, so it is not possible to set any values for it manually; that is why you received the error above.
Setting up the mirror requires additional information, since Cloud Source Repositories needs authorization from Bitbucket, an action which is not exposed in the SourceRepo API.
“mirrorConfig” is read-only at creation because, when using the Cloud Console, this additional required information is provided by relying on the user to log in to both the Cloud and Bitbucket sites from the same browser session. The API has no capability to handle this.
So it is currently not possible to mirror the repository via the API. To create a mirrored repository there is no workaround other than using the UI: you will have to connect to the external sources through the Cloud Console, as explained in the Mirroring a Bitbucket repository documentation.
However, during my investigation I came across a Public Issue regarding this, although it refers to GitHub. You may comment on this Public Issue to ask that Bitbucket be included as well, and “star” it so that it receives more visibility and you receive further updates on it.
I hope this information helps.

How to set params for file-exists-behavior for AWS CodeDeploy on Bitbucket

Atlassian Bitbucket support for AWS CodeDeploy was announced back in 2015.
The AWS CodeDeploy User Guide explains exactly what is executed on the instance to generate a CodeDeploy deployment.
My question is: how do we set a param for
--file-exists-behavior
I want it to be OVERWRITE, but it feels like it is DISALLOW by default.
I know it is possible, because this is how it worked on Elastic Beanstalk (Amazon Linux) on another project; however, now I'm using Ubuntu and I don't have access to the previous settings. It can't be possible only for Amazon Linux, right?
I know this was asked a long time ago, but I ran into this issue myself, so here's a fix for those still struggling with Bitbucket and AWS CodeDeploy:
Go to the file codedeploy_deploy.py, find the call to create_deployment, and add the option fileExistsBehavior='OVERWRITE'. It should end up like this:
response = client.create_deployment(
    applicationName=str(os.getenv('APPLICATION_NAME')),
    deploymentGroupName=str(os.getenv('DEPLOYMENT_GROUP_NAME')),
    revision={
        'revisionType': 'S3',
        's3Location': {
            'bucket': os.getenv('S3_BUCKET'),
            'key': BUCKET_KEY,
            'bundleType': 'zip'
        }
    },
    deploymentConfigName=str(os.getenv('DEPLOYMENT_CONFIG')),
    description='New deployment from BitBucket',
    ignoreApplicationStopFailures=True,
    # overwrite files already on the instance instead of failing the deployment
    fileExistsBehavior='OVERWRITE'
)
I also had to upgrade boto3 from 1.3.0 to the current version (1.9.201) for this to work.

Can't schedule Azure WebJob

I'm not able to publish a scheduled WebJob to Azure App Service. I'm using Visual Studio 2017.
With these settings everything works fine:
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "WebJobName",
  "runMode": "OnDemand"
}
But when I set these settings:
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "WebJobName",
  "startTime": "2017-03-17T07:00:00+00:00",
  "endTime": "2027-03-17T07:00:00+00:00",
  "jobRecurrenceFrequency": "Day",
  "interval": 1,
  "runMode": "Scheduled"
}
Visual Studio 2017 crashes at the "Creating the scheduler job" step.
I can't figure out how to schedule this job. I'm using the package Microsoft.Web.WebJobs.Publish 1.0.13.
Can anyone help me?
Thanks
The feature where VS configures the Azure Scheduler has many issues, and is on the way to deprecation. Instead, the suggested approach is to rely on the CRON feature described here.
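For example, with the CRON approach the schedule lives in a settings.job file deployed next to the WebJob executable, instead of in webjob-publish-settings.json. A minimal sketch matching the daily 07:00 schedule above (the six fields are second, minute, hour, day, month, day-of-week):
{
  "schedule": "0 0 7 * * *"
}
The WebJob itself is then published as a triggered (on-demand) job, and App Service fires it on that CRON schedule.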
As an aside, if you want to get the scheduler working rather than move to CRON, one thing you should do is upgrade to the latest version of the WebJobs NuGet package, which should solve this particular issue.