Spring Boot - SQS Connection Issue - amazon-web-services

I have created a simple Spring Boot application that just sends a message to SQS. While starting
the app, it tries to connect to the EC2 instance metadata service and fails. I tried setting that to false, but it still fails with a different error message. Can someone please help me understand what I am missing?
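For context, the sending code is nothing more than a thin wrapper around QueueMessagingTemplate, roughly like the sketch below (the queue name and class name are placeholders, not my actual code):

import com.amazonaws.services.sqs.AmazonSQSAsync;
import org.springframework.cloud.aws.messaging.core.QueueMessagingTemplate;
import org.springframework.stereotype.Component;

@Component
public class SqsSender {

    private final QueueMessagingTemplate messagingTemplate;

    // Spring Cloud AWS auto-configures an AmazonSQSAsync client from the cloud.aws.* properties.
    public SqsSender(AmazonSQSAsync amazonSqs) {
        this.messagingTemplate = new QueueMessagingTemplate(amazonSqs);
    }

    public void send(String payload) {
        // Resolves the queue URL by name and sends the message payload.
        messagingTemplate.convertAndSend("my-queue", payload);
    }
}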
build.gradle file with the dependencies (I also tried with multiple Spring Boot versions):
plugins {
    id 'org.springframework.boot' version '2.0.5.RELEASE'
    id 'io.spring.dependency-management' version '1.0.10.RELEASE'
    id 'java'
    id 'war'
}

group = 'com.springboot.aws.ebs'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

repositories {
    mavenCentral()
}

dependencyManagement {
    imports {
        mavenBom 'com.amazonaws:aws-java-sdk-bom:1.10.47'
        //mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}

dependencies {
    compile 'com.amazonaws:aws-java-sdk-s3'
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    compile group: 'com.amazonaws', name: 'aws-java-sdk-core', version: '1.11.890'
    compile group: 'com.amazonaws', name: 'aws-java-sdk-sqs', version: '1.11.890'
    compile group: 'org.springframework.cloud', name: 'spring-cloud-starter-aws', version: '2.1.2.RELEASE'
    compile group: 'org.springframework.cloud', name: 'spring-cloud-starter-aws-messaging', version: '2.1.2.RELEASE'
    providedRuntime 'org.springframework.boot:spring-boot-starter-tomcat'
    testImplementation('org.springframework.boot:spring-boot-starter-test') {
        exclude group: 'org.junit.vintage', module: 'junit-vintage-engine'
    }
}

test {
    useJUnitPlatform()
}
application.yml file I used to configure the connection details:
cloud:
  aws:
    region:
      static: <region>
      auto: false
    credentials:
      access-key: <access key>
      secret-key: <secret key>
    end-point:
      uri: <end-point>
I looked into a few Stack Overflow posts with the same issue, but none of them helped. Thanks in advance for your help.
Error Details
com.amazonaws.SdkClientException: Failed to connect to service endpoint:
Caused by: java.net.ConnectException: Host is down (connect failed)
It is trying to connect to the EC2 instance metadata endpoint (http://169.254.169.254).
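For reference, the metadata call typically happens when the SDK walks its default region/credentials chain and finds nothing explicit, so one way people avoid it is to hand the client an explicit region and static credentials. A minimal sketch with the v1 SDK (region and key handling are placeholders; hard-coding secrets is not recommended):

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class ExplicitSqsClient {

    // Builds an SQS client with an explicit region and static credentials,
    // so the default provider chain never falls back to http://169.254.169.254.
    public static AmazonSQS build(String accessKey, String secretKey) {
        return AmazonSQSClientBuilder.standard()
                .withRegion("eu-west-1") // placeholder region
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(accessKey, secretKey)))
                .build();
    }
}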

Related

Data Prepper Pipelines + OpenSearch Trace Analytics

I'm using the latest version of AWS OpenSearch, but when I go to the Trace Analytics dashboard it does not show the traces sent by Data Prepper.
My setup:
Manually OpenTelemetry-instrumented application
Data Prepper running in Docker (opensearchproject/data-prepper:latest)
OpenSearch running on the latest version
Sample Configuration
data-prepper-config.yaml
ssl: false
pipelines.yaml
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - otel_trace_raw:
  sink:
    - opensearch:
        hosts: [ "https://opensearch-domain" ]
        username: "admin"
        password: "admin"
        index_type: trace-analytics-raw
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["https://opensearch-domain"]
        username: "admin"
        password: "admin"
        index_type: trace-analytics-service-map
remote-collector.yaml
...
exporters:
  otlp/data-prepper:
    endpoint: data-prepper-address:21890
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/data-prepper]
When I go to the Query Workbench and run the query SELECT * FROM otel-v1-apm-span, I get the list of received trace spans. But I'm unable to see any charts on the Trace Analytics dashboard (both Traces and Services); it's just an empty dashboard.
I'm also getting a warning:
WARN org.opensearch.dataprepper.plugins.processor.oteltrace.OTelTraceRawProcessor - Missing trace group for SpanId: xxxxxxxxxxxx
The traceGroupFields are also empty.
"traceGroupFields": {
"endTime": null,
"durationInNanos": null,
"statusCode": null
}
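From what I've read, the trace group (and its endTime, durationInNanos and statusCode) is derived from each trace's root span, so I suspect the warning means root spans never reach Data Prepper. My manual instrumentation creates an explicit root span, roughly like this sketch (tracer and span names are placeholders):

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class CheckoutHandler {

    private final Tracer tracer = GlobalOpenTelemetry.getTracer("demo-app");

    public void handle() {
        // Root span: Trace Analytics takes the trace group from this span.
        Span root = tracer.spanBuilder("GET /checkout").startSpan();
        try (Scope ignored = root.makeCurrent()) {
            // Child span: has a parent, so it never carries the trace group itself.
            Span child = tracer.spanBuilder("load-cart").startSpan();
            try (Scope ignoredChild = child.makeCurrent()) {
                // ... business logic ...
            } finally {
                child.end();
            }
        } finally {
            root.end();
        }
    }
}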
Is there something wrong with my setup? Any help is appreciated.

Deploying a model to the regional endpoint on AI Platform Prediction succeeds, but deploying the same model to the global endpoint fails

I have a scikit-learn model saved in Cloud Storage which I am attempting to deploy with AI Platform Prediction. When I deploy this model to a regional endpoint, the deployment completes successfully:
➜ gcloud ai-platform versions describe regional_endpoint_version --model=regional --region us-central1
Using endpoint [https://us-central1-ml.googleapis.com/]
autoScaling:
  minNodes: 1
createTime: '2020-12-30T15:21:55Z'
deploymentUri: <REMOVED>
description: testing deployment to a regional endpoint
etag: <REMOVED>
framework: SCIKIT_LEARN
isDefault: true
machineType: n1-standard-4
name: <REMOVED>
pythonVersion: '3.7'
runtimeVersion: '2.2'
state: READY
However, when I try to deploy the exact same model, using the same Python/runtime versions, to the global endpoint, the deployment fails, saying there was an error loading the model:
(aiz) ➜ stanford_nlp_a3 gcloud ai-platform versions describe public_object --model=global
Using endpoint [https://ml.googleapis.com/]
autoScaling: {}
createTime: '2020-12-30T15:12:11Z'
deploymentUri: <REMOVED>
description: testing global endpoint deployment
errorMessage: 'Create Version failed. Bad model detected with error: "Error loading
the model"'
etag: <REMOVED>
framework: SCIKIT_LEARN
machineType: mls1-c1-m2
name: <REMOVED>
pythonVersion: '3.7'
runtimeVersion: '2.2'
state: FAILED
I tried making the .joblib object public to rule out a permissions difference between the two endpoints, but the deployment to the global endpoint still failed. I removed the deploymentUri from this post since I have been experimenting with the permissions on the model object, but the paths are identical in the two model versions.
The machine types for the two deployments have to be different, and for the regional deployment I use min nodes = 1 while for global I can use min nodes = 0, but other than that and the etags everything else is exactly the same.
I couldn't find any information in the AI Platform Prediction regional endpoints docs page which indicated certain models could only be deployed to a certain type of endpoint. The "Error loading the model" error message doesn't give me a lot to go on since it doesn't appear to be a permissions issue with the model file.
When I add the --log-http option to the create version command, I see that the error code is 3, but the message doesn't reveal any additional information:
➜ ~ gcloud ai-platform versions create $VERSION_NAME \
--model=$MODEL_NAME \
--origin=$MODEL_DIR \
--runtime-version=2.2 \
--framework=$FRAMEWORK \
--python-version=3.7 \
--machine-type=mls1-c1-m2 --log-http
Using endpoint [https://ml.googleapis.com/]
=======================
==== request start ====
...
...
The final response from the server looks like this:
---- response start ----
status: 200
-- headers start --
<headers>
-- headers end --
-- body start --
{
  "name": "<name>",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata",
    "createTime": "2020-12-30T22:53:30Z",
    "startTime": "2020-12-30T22:53:30Z",
    "endTime": "2020-12-30T22:54:37Z",
    "operationType": "CREATE_VERSION",
    "modelName": "<name>",
    "version": {
      <version info>
    }
  },
  "done": true,
  "error": {
    "code": 3,
    "message": "Create Version failed. Bad model detected with error: \"Error loading the model\""
  }
}
-- body end --
total round trip time (request+response): 0.096 secs
---- response end ----
----------------------
Creating version (this might take a few minutes)......failed.
ERROR: (gcloud.ai-platform.versions.create) Create Version failed. Bad model detected with error: "Error loading the model"
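One way to rule out the artifact itself (rather than the endpoint) is to load the same model outside the managed service with local prediction; a sketch, assuming a local copy of the model directory and a small sample-instances file (both paths are placeholders):

gcloud ai-platform local predict \
  --model-dir=./local_model_dir \
  --json-instances=./sample_instances.json \
  --framework=scikit-learn

If that works, the difference is more likely the machine type: the global-endpoint version above runs on mls1-c1-m2, which has far less memory than the n1-standard-4 used for the regional deployment.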
Can anyone explain what I am missing here?

GCP Deployment Manager "ResourceErrorCode":"400" while database user creation

I am experimenting with Deployment Manager, and each time I try to deploy a Cloud SQL instance with a database on it and two users, some of the resources fail. Most of the time it is the users:
conf.yaml:
resources:
- name: mycloudsql
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    name: mycloudsql-01
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    databaseVersion: MYSQL_5_7
    region: europe-west6
    settings:
      tier: db-f1-micro
      locationPreference:
        zone: europe-west6-a
      activationPolicy: ALWAYS
      dataDiskSizeGb: 10
- name: mydjangodb
  type: gcp-types/sqladmin-v1beta4:databases
  properties:
    name: django-db-01
    instance: $(ref.mycloudsql.name)
    charset: utf8
- name: sqlroot
  type: gcp-types/sqladmin-v1beta4:users
  properties:
    name: root
    host: "%"
    instance: $(ref.mycloudsql.name)
    password: root
- name: sqluser
  type: gcp-types/sqladmin-v1beta4:users
  properties:
    name: user
    instance: $(ref.mycloudsql.name)
    password: user
Error:
PS C:\Users\user\Desktop\Python\GCP> gcloud --project=sound-catalyst-263911 deployment-manager deployments create dm-sql-test-11 --config conf.yaml
The fingerprint of the deployment is TZ_wYom9Q64Hno6X0bpv9g==
Waiting for create [operation-1589869946223-5a5fa71623bc9-1912fcb9-bc59aafc]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1589869946223-5a5fa71623bc9-1912fcb9-bc59aafc]: errors:
- code: RESOURCE_ERROR
location: /deployments/dm-sql-test-11/resources/sqluser
message: '{"ResourceType":"gcp-types/sqladmin-v1beta4:users","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Precondition
check failed.","status":"FAILED_PRECONDITION","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/sound-catalyst-263911/instances/mycloudsql-01/users","httpMethod":"POST"}}'
- code: RESOURCE_ERROR
location: /deployments/dm-sql-test-11/resources/sqlroot
message: '{"ResourceType":"gcp-types/sqladmin-v1beta4:users","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Precondition
check failed.","status":"FAILED_PRECONDITION","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/sound-catalyst-263911/instances/mycloudsql-01/users","httpMethod":"POST"}}'
It doesn't say what that failing precondition is. Or am I missing something?
It seems the database is not yet fully created by the time Deployment Manager starts creating the users, even though the reference notation is used in the YAML to express the dependencies. That is why you receive the FAILED_PRECONDITION error.
As a workaround you can split the deployment into two parts:
Create a CloudSQL instance and a database;
Create users.
This does not look elegant, but it works.
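If splitting the deployment is not desirable, another option often used for this exact failure is to serialize the user resources with explicit dependencies, since Cloud SQL only allows one pending operation per instance at a time. A sketch (untested against your project):

- name: sqlroot
  type: gcp-types/sqladmin-v1beta4:users
  metadata:
    dependsOn:
    - mydjangodb        # wait for the database, not just the instance
  properties:
    name: root
    host: "%"
    instance: $(ref.mycloudsql.name)
    password: root
- name: sqluser
  type: gcp-types/sqladmin-v1beta4:users
  metadata:
    dependsOn:
    - sqlroot           # one Cloud SQL operation per instance at a time
  properties:
    name: user
    instance: $(ref.mycloudsql.name)
    password: user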
Alternatively, you can consider using Terraform. Fortunately, the Cloud Shell instance comes with Terraform pre-installed. There is sample Terraform code for Cloud SQL out there, for example this one:
CloudSQL deployment with Terraform

GCP Deployment Manager Delete RESOURCE_ERROR

I created a Deployment Manager template (Python) to create a GKE zonal cluster (a v1beta1 feature). When I run gcloud deployment-manager deployments create <deploymentname> --config <config.yaml>, the GKE cluster is created as expected.
I used type:gcp-types/container-v1beta1:projects.zones.clusters in my python template.
However, when I run the delete command on DM, i.e. gcloud deployment-manager deployments delete <deploymentname>, I get the following error:
The error says that the field name could not be found, even though I did specify name in my config.yaml file.
Error in Operation [operation-1536152440470-5751f5c88f9f3-5ca3a167-d12a593d]: errors:
- code: RESOURCE_ERROR
  location: /deployments/test-project-gke-xhqgxn6pkd/resources/test-gkecluster-xhqgxn6pkd
  message: "{"ResourceType":"gcp-types/container-v1beta1:projects.zones.clusters",
    "ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,
    "message":"Invalid JSON payload received. Unknown name "name": Cannot bind query
    parameter. Field 'name' could not be found in request message.",
    "status":"INVALID_ARGUMENT",
    "details":[{"@type":"type.googleapis.com/google.rpc.BadRequest",
    "fieldViolations":[{"description":"Invalid JSON payload received. Unknown name "name":
    Cannot bind query parameter. Field 'name' could not be found in request message."}]}],
    "statusMessage":"Bad Request",
    "requestPath":"https://container.googleapis.com/v1beta1/projects/test-project/zones/us-east1-b/clusters/",
    "httpMethod":"GET"}}"
Here's the sample config.yaml
imports:
- path: templates/gke/gke.py
  name: gke.py

resources:
- name: ${CLUSTER_NAME}
  type: gke.py
  properties:
    zone: ${ZONE}
    cluster:
      name: ${CLUSTER_NAME}
      description: test gke cluster
      network: ${NETWORK_NAME}
      subnetwork: ${SUBNET_NAME}
      initialClusterVersion: ${CLUSTER_VERSION}
      nodePools:
      - name: ${NODEPOOL_NAME}
        initialNodeCount: ${NODE_COUNT}
        config:
          machineType: ${MACHINE_TYPE}
          diskSizeGb: 100
          imageType: cos
          oauthScopes:
          - https://www.googleapis.com/auth/compute
          - https://www.googleapis.com/auth/devstorage.read_only
          - https://www.googleapis.com/auth/logging.write
          - https://www.googleapis.com/auth/monitoring
          localSsdCount: ${LOCALSSD_COUNT}
Any ideas what I'm missing here?

Deploying EmberJS to AWS using SSH + RSync

I've managed to deploy a simple todo app onto AWS with S3 using this site:
http://emberigniter.com/deploy-ember-cli-app-amazon-s3-linux-ssh-rsync/
However, when I attempt to do this (Deploying with SSH and Rsync) according to the tutorial, I run into the following error:
gzipping **/*.{js,css,json,ico,map,xml,txt,svg,eot,ttf,woff,woff2}
ignoring null
✔ assets/ember-user-app-d41d8cd98f00b204e9800998ecf8427e.css
✔ assets/vendor-d41d8cd98f00b204e9800998ecf8427e.css
✔ assets/ember-user-app-45a9825ab0116a8007bb48645b09f060.js
✔ crossdomain.xml
✔ robots.txt
✔ assets/vendor-d008595752c8e859a04200ceb9a77874.js
gzipped 6 files ok
|
+- upload
| |
| +- rsync
- Uploading using rsync...
- Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-47/rsync/io.c(453) [sender=2.6.9]
The following is my config/deploy.js
module.exports = function(deployTarget) {
  var ENV = {
    build: {
      environment: deployTarget
    },
    's3-index': {
      accessKeyId: "<myKeyID>",
      secretAccessKey: "<mySecret>",
      bucket: "emberjsft",
      region: "ap-southeast-1",
      allowOverwrite: true
    },
    's3': {
      accessKeyId: "<myKeyID>",
      secretAccessKey: "<mySecret>",
      bucket: "emberjsft",
      region: "ap-southeast-1"
    },
    'ssh-index': {
      remoteDir: "/var/www/",
      username: "ec2-user",
      host: "ec2-<elastic-ip>.ap-southeast-1.compute.amazonaws.com",
      privateKeyFile: "/Users/imac/MY_AWS_PEMFILE.pem",
      allowOverwrite: true
    },
    rsync: {
      dest: "/var/www/",
      username: "ec2-user",
      host: "ec2-<elastic-ip>.ap-southeast-1.compute.amazonaws.com",
      delete: false
    }
    // include other plugin configuration that applies to all deploy targets here
  };

  if (deployTarget === 'development') {
    ENV.build.environment = 'development';
    // configure other plugins for development deploy target here
  }

  if (deployTarget === 'staging') {
    ENV.build.environment = 'production';
    // configure other plugins for staging deploy target here
  }

  if (deployTarget === 'production') {
    ENV.build.environment = 'production';
    // configure other plugins for production deploy target here
  }

  // Note: if you need to build some configuration asynchronously, you can return
  // a promise that resolves with the ENV object instead of returning the
  // ENV object synchronously.
  return ENV;
};
How should I resolve this issue?
Thanks
I've just spent the last hour fighting the same issue as you. I was able to kind of fix it by using ssh-add /home/user/.ssh/example-key.pem and removing privateKeyFile.
I still get an error thrown after the transfer ends, but I can confirm all files successfully transferred to my EC2 box despite the error.
deploy.js
module.exports = function (deployTarget) {
  var ENV = {
    build: {
      environment: deployTarget
    },
    'ssh-index': {
      remoteDir: "/var/www/",
      username: "ubuntu",
      host: "52.xx.xx.xx",
      allowOverwrite: true
    },
    rsync: {
      host: "ubuntu@52.xx.xx.xx",
      dest: "/var/www/",
      recursive: true,
      delete: true
    }
  };
  return ENV;
};
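The key loading itself is just standard ssh-agent usage; something like this, with the key path and host as placeholders:

# start an agent if one is not already running, load the key, then confirm plain SSH works
eval "$(ssh-agent -s)"
ssh-add /home/user/.ssh/example-key.pem
ssh ubuntu@52.xx.xx.xx 'echo connection ok'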
In your deploy.js file you need to put your own value for accessKeyId; you left a placeholder in its place. The same goes for secretAccessKey, and for host you need to put your Elastic IP address.
myKeyID and mySecret should live in a .env file and be accessed here via process.env.myKeyID and process.env.mySecret.
It is not good practice to hard-code the keys in the deploy.js file; a better practice would be to read them using Consul.