Custom OpsWorks layer fails while deploying new PostgreSQL server - amazon-web-services

I'm following this guide to create a custom PostgreSQL layer within OpsWorks to build a server for my Ruby on Rails app. I'm using the custom JSON provided in the blog post:
{
  "postgresql" : {
    "password" : {
      "postgres" : "unhackablepassword"
    },
    "contrib" : {
      "packages" : ["postgresql-contrib-9.2"],
      "extensions" : ["hstore"]
    }
  }
}
The following custom cookbooks are used (git://github.com/growthrepublic/cookbooks.git):
postgresql::contrib
postgresql::ruby
postgresql::server
postgresql
The instance setup fails with this error message:
[2014-01-08T20:36:49+00:00] FATAL: Chef::Exceptions::Package: package[postgresql-contrib-9.2] (postgresql::contrib line 24) had an error: Chef::Exceptions::Package: No version specified, and no candidate version available for postgresql-contrib-9.2
I'm new to Chef and OpsWorks; does anyone have any idea why it's failing?
Thanks!
Francis

I've had success by adding the OS package postgresql-server-dev-9.3.
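"No version specified, and no candidate version available" from Chef usually means apt cannot find that exact package name in the instance's configured repositories. If the instance's distribution ships the PostgreSQL 9.3 series (which the fix above suggests), pointing the contrib section of the custom JSON at the 9.3 package may also work; a sketch, assuming postgresql-contrib-9.3 is available in your apt sources:
{
  "postgresql" : {
    "contrib" : {
      "packages" : ["postgresql-contrib-9.3"],
      "extensions" : ["hstore"]
    }
  }
}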

Related

EB throwing nodejs version error during deployment

I'm trying to deploy a Node.js app on Elastic Beanstalk. I created the environment with Node.js 10, matching the Node.js version in my package.json. The deployment fails, and the logs show:
An error occurred during execution of command [app-deploy] - [Install customer specified node.js version]. Stop running the command. Error: unsupported node version v10.23.1, please specify any of node versions in [v10.0.0 v10.1.0 v10.10.0 v10.11.0 v10.12.0 v10.13.0 v10.14.0 v10.14.1 v10.14.2 v10.15.0 v10.15.1 v10.15.2 v10.15.3 v10.16.0 v10.16.1 v10.16.2 v10.16.3 v10.17.0 v10.18.0 v10.18.1 v10.19.0 v10.2.0 v10.2.1 v10.20.0 v10.20.1 v10.21.0 v10.22.0 v10.22.1 v10.23.0 v10.23.1 v10.23.2 v10.23.3 v10.24.0 v10.3.0 v10.4.0 v10.4.1 v10.5.0 v10.6.0 v10.7.0 v10.8.0 v10.9.0]
My package.json includes:
"engines": {
"node": "v10.23.1", # I've tried 10.23.1, ^10.23.1, etc
"npm": ">=6.14.10"
}
This error appears to be nonsensical - am I missing something about EB?
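For reference, package.json must be strict JSON, so an inline # comment like the one above (assuming it is only an annotation in the post, not in the actual file) would make the file unparsable. A syntactically valid engines block with the version pinned exactly, and without the leading v (whether EB's matcher accepts the v prefix is an assumption worth testing), looks like:
"engines": {
  "node": "10.23.1",
  "npm": ">=6.14.10"
}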

Error when creating model version on GCP AI Platform

I am trying to create a version of the model and link it to my exported TensorFlow model. However, it gives me the following error:
health probe timeout: generic::unavailable: The fetch failed with status 3 and reason: UNREACHABLE_5xx Check the url is available and that authentication parameters are specified correctly.
I have made my SavedModel directory public and have attached service-xxxxxxxxxxxx@cloud-ml.google.com.iam.gserviceaccount.com to my bucket with Storage Legacy Bucket Reader. My service account service-xxxxxxxxxxxx@cloud-ml.google.com.iam.gserviceaccount.com has the roles ML Engine Admin and Storage Admin. The bucket and ml-engine are part of the same project and region, us-central1. I am initialising the model version with the following config:
Python version: 2.7
Framework: TensorFlow
Framework version: 1.12.3
Runtime version: 1.12
Machine type: n1-highmem-2
Accelerator: NVIDIA Tesla K80
Accelerator count: 1
Note: I used Python 2.7 for training and runtime version 1.12.
Can you verify the SavedModel is valid by using the CLI?
Check that serving tag-sets are available in your SavedModel using the SavedModel CLI:
saved_model_cli show --dir <your model directory>
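For example, assuming the model was exported to ./export/my_model (--tag_set and --all are standard saved_model_cli flags):
# List the tag-sets in the SavedModel
saved_model_cli show --dir ./export/my_model
# Inspect the signatures under the serve tag-set (the tag AI Platform serving expects)
saved_model_cli show --dir ./export/my_model --tag_set serve
# Dump all signatures with their input/output tensors
saved_model_cli show --dir ./export/my_model --all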

Filebeat and AWS Elasticsearch - Not Working

I have good experience working with Elasticsearch - I have worked with version 2.4 and am now trying to learn the newer Elasticsearch.
I am trying to implement Filebeat to send my Apache and system logs to my Elasticsearch endpoint. To save time, I launched a t2.medium single-node instance on the AWS Elasticsearch Service under the public domain, and I attached an access policy that allows everyone to access the cluster.
The AWS Elasticsearch instance is up and running healthy.
I launched an Ubuntu (18.04) server, downloaded the Filebeat tar, and made the following configuration in filebeat.yml:
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://my-public-test-domain.ap-southeast-1.es.amazonaws.com:443"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
I enabled the required modules:
filebeat modules enable system apache
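You can verify which modules are enabled with (run from the Filebeat directory, as above):
./filebeat modules list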
Then, as per the Filebeat documentation, I changed the ownership of the Filebeat file and started Filebeat with the following commands:
sudo chown root filebeat.yml
sudo ./filebeat -e
When I started Filebeat I faced the following permission and ownership issue:
Error loading config from file '/home/ubuntu/beats/filebeat-7.2.0-linux-x86_64/modules.d/system.yml', error invalid config: config file ("/home/ubuntu/beats/filebeat-7.2.0-linux-x86_64/modules.d/system.yml") must be owned by the user identifier (uid=0) or root
To resolve this, I changed the ownership of the files which were throwing errors.
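For example, from the Filebeat directory (system.yml is taken from the error above; the apache module file is assumed to be named analogously):
sudo chown root modules.d/system.yml modules.d/apache.yml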
When I restarted the Filebeat service, I started facing the following issue:
Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: unauthorized access, could not connect to the xpack endpoint, verify your credentials
Going through this link, I found that to work with AWS Elasticsearch I would need the Beats OSS versions.
So I downloaded the OSS version of Filebeat from this link and followed the same procedure as above, but still no luck. Now I am facing the following errors:
Error 1:
Attempting to reconnect to backoff(elasticsearch(https://my-public-test-domain.ap-southeast-1.es.amazonaws.com:443)) with 12 reconnect attempt(s)
Error 2:
Failed to connect to backoff(elasticsearch(https://my-public-test-domain.ap-southeast-1.es.amazonaws.com:443)): Connection marked as failed because the onConnect callback failed: 1 error: Error loading pipeline for fileset system/auth: This module requires an Elasticsearch plugin that provides the geoip processor. Please visit the Elasticsearch documentation for instructions on how to install this plugin. Response body: {"error":{"root_cause":[{"type":"parse_exception","reason":"No processor type exists with name [geoip]","header":{"processor_type":"geoip"}}],"type":"parse_exception","reason":"No processor type exists with name [geoip]","header":{"processor_type":"geoip"}},"status":400}
From the second error I understand that the geoip plugin is not available, which is why I am facing this error.
What else needs to be done to get this working?
Has anyone been able to successfully connect Beats to AWS Elasticsearch?
What other steps could I take to mitigate the above issue?
Environment details:
AWS Elasticsearch version: 6.7
Filebeat: 7.2.0
First, you need to use the OSS version of Filebeat with AWS ES: https://www.elastic.co/downloads/beats/filebeat-oss
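For the 7.2.0 version used here, the Linux tarball would be something like the following (URL assumed from Elastic's artifact naming scheme; verify against the downloads page):
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.2.0-linux-x86_64.tar.gz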
Second, AWS Elasticsearch does not provide the GeoIP processor, so you will need to edit the pipelines for any of the default modules you want to use and make sure geoip is removed or commented out.
For example, in /usr/share/filebeat/module/system/auth/ingest/pipeline.json (that's the path when installed from the deb package - your path will be different of course), comment out:
{
  "geoip": {
    "field": "source.ip",
    "target_field": "source.geo",
    "ignore_failure": true
  }
},
Repeat the same for the apache module.
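If you're not sure which pipeline files reference geoip, you can list them first (the path below matches the tarball layout from the question; adjust to your install):
grep -rl geoip /home/ubuntu/beats/filebeat-7.2.0-linux-x86_64/module/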
I've spent hours trying to make the Filebeat IIS module work with AWS Elasticsearch. I kept getting the ingest-geoip error; the below fixed the issue.
For Windows IIS logs going to AWS Elasticsearch, remove geoip from the Filebeat module configuration in these files (comment out the geoip processor block, as in the Linux example above):
C:\Program Files (x86)\filebeat\module\iis\access\ingest\default.json
C:\Program Files (x86)\filebeat\module\iis\access\manifest.yml
C:\Program Files (x86)\filebeat\module\iis\error\ingest\default.json
C:\Program Files (x86)\filebeat\module\iis\error\manifest.yml

Pushing docker image through jenkins

I'm pushing a Docker image through a Jenkins pipeline, but I'm getting the following error:
ERROR: Could not find credentials matching
gcr:["google-container-registry"]
I tried with:
gcr:["google-container-registry"]
gcr:[google-container-registry]
gcr:google-container-registry
google-container-registry
but none of them worked.
In the global credentials I have:
NAME: google-container-registry
KIND: Google Service Account from private key
DESCRIPTION: A Google robot account for accessing Google APIs and services.
The proper syntax is the following (provided your GCR credentials ID is 'google-container-registry'):
docker.withRegistry("https://gcr.io", "gcr:google-container-registry") {
    sh "docker push [your_image]"
}
Check if you have the https://plugins.jenkins.io/google-container-registry-auth/ plugin installed.
After the plugin is installed, use the gcr:credential-id syntax.
Example:
stage("docker build"){
Img = docker.build(
"gcpProjectId/imageName:imageTag",
"-f Dockerfile ."
)
}
stage("docker push") {
docker.withRegistry('https://gcr.io', "gcr:credential-id") {
Img.push("imageTag")
}
}
Go to Jenkins → Manage Jenkins → Manage Plugins and install plugins:
Google Container Registry
Google OAuth Credentials
CloudBees Docker Build and Publish
Then go to Jenkins → Credentials → Global Credentials → Add Credentials, choose the desired 'Project Name', and upload the JSON key file.
Jenkinsfile:
stage('Deploy Image') {
    steps {
        script {
            docker.withRegistry('https://gcr.io', "gcr:${ID}") {
                dockerImage.push("$BUILD_NUMBER")
                dockerImage.push('latest')
            }
        }
    }
}

Cannot create Elastic Beanstalk with multiple IIS applications

I am trying to migrate a production web server to AWS. The server is Windows-based IIS with multiple applications defined under one website. I have tried both Elastic Beanstalk and CloudFormation. I would prefer Elastic Beanstalk, but I would be happy with anything that has auto scaling and an easy deployment routine.
I have created a sample website with one child application; it works fine locally. I tried to edit the default AMI for Elastic Beanstalk to add the extra application and deploy to it. When I tried to redeploy the application with the new AMI, it failed to finish the deployment with the following error:
[Instance: i-3f13bc11 Module: AWSEBAutoScalingGroup ConfigSet:
Infra-WriteRuntimeConfig, Infra-WriteApplication1,
Infra-WriteApplication2, Infra-EmbeddedPreBuild, Hook-PreAppDeploy,
Hook-EnactAppDeploy, Infra-EmbeddedPostBuild, Hook-PostAppDeploy]
Command failed on instance. Return code: 1 Output: null.
I did try the CloudFormation template that comes with Visual Studio; it did not work either and failed with a very similar error message.
The best way to do this is to use CloudFormation to create an Auto Scaling group. In the LaunchConfiguration resource you can pull your files from S3 and instruct IIS to install the apps. For example:
"WebAsSpotLaunchConfiguration" : {
"Type" : "AWS::AutoScaling::LaunchConfiguration",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"sources" : {
"C:\\inetpub\\wwwroot" : {
"Fn::Join" : [
"/",
[
"http://s3.amazonaws.com",
{
"Ref" : "DeployS3Bucket"
},
{
"Ref" : "DeployWebS3Key"
}
]
]
}
},
"commands" : {
"1-add-app-1" : {
"command" : "C:\\Windows\\System32\\inetsrv\\appcmd add app /site.name:MySite /path:/app1 /physicalPath:C:\inetpub\mysite\app1",
"waitAfterCompletion" : "0"
},
"2-add-app-2" : {
"command" : "C:\\Windows\\System32\\inetsrv\\appcmd add app /site.name:MySite /path:/app2 /physicalPath:C:\inetpub\mysite\app2",
"waitAfterCompletion" : "0"
}
}
}
},
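One caveat: the AWS::CloudFormation::Init metadata above only takes effect if cfn-init runs on the instance at boot, typically from the launch configuration's UserData. A minimal sketch for Windows (the resource name matches the fragment above; stack name and region come from pseudo parameters), which would sit in the same LaunchConfiguration's Properties block:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
  "<script>\n",
  "cfn-init.exe -v -s ", { "Ref" : "AWS::StackName" },
  " -r WebAsSpotLaunchConfiguration",
  " --region ", { "Ref" : "AWS::Region" }, "\n",
  "</script>"
]]}}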
I realize that if you don't already know CloudFormation, this may take some time to set up. But it's worth the investment, in my opinion.