AWS CloudFormation 'UserData' doesn't seem to work

I'm writing an AWS CloudFormation script to build an EC2 instance. I'd like to provision the instance by installing some packages, downloading some repos and running some scripts. Amazon tells me I can do this in CloudFormation with the UserData field. However, it just doesn't seem to work at all.
Here is what I'm working with currently:
DWHServer:
  Type: "AWS::EC2::Instance"
  Properties:
    DisableApiTermination: false # no termination protection
    EbsOptimized: false # optimize for elastic block store
    IamInstanceProfile: !Ref DWHServerIAMIP
    ImageId: "ami-5189a661" # ubuntu-trusty-14.04-amd64-server-20150325
    InstanceInitiatedShutdownBehavior: "terminate"
    InstanceType: "t2.medium"
    KeyName: !FindInMap [EnvMap, KeyPair, !Ref EnvType]
    Monitoring: true
    SecurityGroupIds:
      - !Ref DWHServerSG
    SourceDestCheck: true # ??
    SubnetId: "subnet-aed2ecf6" # Stage-etl-2c
    UserData: !Base64
      "Fn::Join": ["", ["#!/bin/bash -xe\n", "touch ~/confirm_work.txt\n"]]
This is the simplest possible example: I just want it to create a file to prove that it's running, but it doesn't even do that. The docs say to look at /var/log/cloud-init-output.log. I looked there, but I don't see anything about UserData. There does seem to be some sort of network error, though I'm not sure how to interpret it or what to do about it.
Here are the contents of the cloud-init-output.log file on the instance:
Cloud-init v. 0.7.5 running 'init-local' at Sat, 04 Mar 2017 02:40:07 +0000. Up 3.85 seconds.
Cloud-init v. 0.7.5 running 'init' at Sat, 04 Mar 2017 02:40:09 +0000. Up 6.01 seconds.
ci-info: +++++++++++++++++++++++++Net device info+++++++++++++++++++++++++
ci-info: +--------+------+-----------+---------------+-------------------+
ci-info: | Device | Up | Address | Mask | Hw-Address |
ci-info: +--------+------+-----------+---------------+-------------------+
ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | . |
ci-info: | eth0 | True | 10.0.7.84 | 255.255.255.0 | 0a:3a:b0:a4:96:5d |
ci-info: +--------+------+-----------+---------------+-------------------+
ci-info: ++++++++++++++++++++++++++++++Route info++++++++++++++++++++++++++++++
ci-info: +-------+-------------+----------+---------------+-----------+-------+
ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
ci-info: +-------+-------------+----------+---------------+-----------+-------+
ci-info: | 0 | 0.0.0.0 | 10.0.7.1 | 0.0.0.0 | eth0 | UG |
ci-info: | 1 | 10.0.7.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
ci-info: +-------+-------------+----------+---------------+-----------+-------+
Mar 4 02:40:11 ubuntu pollinate[723]: ERROR: Network communication failed [60]
02:40:10.394529 * Hostname was NOT found in DNS cache
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
02:40:10.407240 * Trying 91.189.94.24...
02:40:10.550022 * Connected to entropy.ubuntu.com (91.189.94.24) port 443 (#0)
02:40:10.551661 * successfully set certificate verify locations:
02:40:10.551698 * CAfile: /etc/pollinate/entropy.ubuntu.com.pem
CApath: /dev/null
02:40:10.551804 * SSLv3, TLS handshake, Client hello (1):
02:40:10.551832 } [data not shown]
02:40:10.711080 * SSLv3, TLS handshake, Server hello (2):
02:40:10.711129 { [data not shown]
02:40:10.711191 * SSLv3, TLS handshake, CERT (11):
02:40:10.711216 { [data not shown]
02:40:10.711490 * SSLv3, TLS alert, Server hello (2):
02:40:10.711520 } [data not shown]
02:40:10.711602 * SSL certificate problem: unable to get local issuer certificate
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
02:40:10.711732 * Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
2017-03-04 02:40:11,144 - util.py[WARNING]: Running seed_random (<module 'cloudinit.config.cc_seed_random' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_seed_random.pyc'>) failed
Generating public/private rsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
0c:54:09:ab:bc:b8:63:b5:6c:d2:d5:47:21:4a:38:6f root@ip-10-0-7-84
The key's randomart image is:
+--[ RSA 2048]----+
| .oo.. |
| o...o . |
| +o. . . |
| . .Eo . |
| o. .S. |
| .... . . |
| .+.o . |
| +.= |
| ..+ |
+-----------------+
Generating public/private dsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
89:26:94:17:79:6d:45:15:fc:5f:37:95:31:2e:e9:f7 root@ip-10-0-7-84
The key's randomart image is:
+--[ DSA 1024]----+
| .. . oooo+o|
| .... o +.o|
| o .. . o o.|
| . . . . . ..+|
| . o S . .=|
| o . o|
| E|
| |
| |
+-----------------+
Generating public/private ecdsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key.
Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub.
The key fingerprint is:
af:a2:c7:b3:95:5c:17:2e:ce:69:b3:f6:39:c7:67:91 root@ip-10-0-7-84
The key's randomart image is:
+--[ECDSA 256]---+
| |
| |
| . |
| . . |
| S o o .|
| . * + E |
| . + B . .|
| =. o.o..o o|
| .o.+....oo o |
+-----------------+
Cloud-init v. 0.7.5 running 'modules:config' at Sat, 04 Mar 2017 02:40:14 +0000. Up 11.53 seconds.
Generating locales... en_US.UTF-8... up-to-date
Generation complete.
Cloud-init v. 0.7.5 running 'modules:final' at Sat, 04 Mar 2017 02:40:17 +0000. Up 13.61 seconds.
+ touch /root/confirm_work.txt
Cloud-init v. 0.7.5 finished at Sat, 04 Mar 2017 02:40:17 +0000. Datasource DataSourceEc2. Up 13.83 seconds
Any tips would be greatly appreciated. Thanks!

Look at the second to last entry in the log:
+ touch /root/confirm_work.txt
The command is indeed invoked. Note that every command in your EC2 user data shows up in that log file (/var/log/cloud-init-output.log) with a plus sign prepended, like above. Is it possible the file exists but not where you expect? User data runs as root, so ~ expands to /root, which is why the file was created as /root/confirm_work.txt. You can also add an "echo" command before the touch; seeing its output would confirm that everything is working fine. Or maybe you're trying to touch a file in a directory you don't have access to. Try touching a file in /tmp to narrow things down.

Pro tip: always use fully qualified paths in scripts. Try this for your UserData. Does it help?
UserData: !Base64
"Fn::Join": ["\n", ["#!/bin/bash -xe", "/bin/touch /tmp/confirm_work.txt"]]

Related

Problem App Engine app to connect to MySQL in CloudSQL

I've configured a second generation Cloud SQL instance and an App Engine application (Python 2.7) in one project. I've made the necessary settings according to that page.
app.yaml
runtime: python27
api_version: 1
threadsafe: true

env_variables:
  CLOUDSQL_CONNECTION_NAME: coral-heuristic-215610:us-central1:db-basic-1
  CLOUDSQL_USER: root
  CLOUDSQL_PASSWORD: xxxxxxxxx

beta_settings:
  cloud_sql_instances: coral-heuristic-215610:us-central1:db-basic-1

libraries:
- name: lxml
  version: latest
- name: MySQLdb
  version: latest

handlers:
- url: /main
  script: main.app
Now when I try to connect from the app (inside Cloud Shell), I get this error:
OperationalError: (2002, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')
Direct connection works:
$ gcloud sql connect db-basic-1 --user=root
was successful...
MySQL [correction_dict]> SHOW PROCESSLIST;
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
| 9 | root | localhost | NULL | Sleep | 4 | | NULL |
| 10 | root | localhost | NULL | Sleep | 4 | | NULL |
| 112306 | root | 35.204.173.246:59210 | correction_dict | Query | 0 | starting | SHOW PROCESSLIST |
| 112357 | root | localhost | NULL | Sleep | 4 | | NULL |
| 112368 | root | localhost | NULL | Sleep | 0 | | NULL |
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
I've authorized the IP to connect to the Cloud SQL instance.
Any hints or help?
Google App Engine Standard provides a unix socket at /cloudsql/[INSTANCE_CONNECTION_NAME] that automatically connects you to your Cloud SQL instance. All you need to do is connect to it at that address. For the MySQLdb library, that looks like this:
cloudsql_unix_socket = '/cloudsql/' + CLOUDSQL_CONNECTION_NAME
db = MySQLdb.connect(
    unix_socket=cloudsql_unix_socket,
    user=CLOUDSQL_USER,
    passwd=CLOUDSQL_PASSWORD)
(If you are running AppEngine Flexible, connecting is different and can be found here)
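Putting it together with the env_variables from the app.yaml above, here is a minimal sketch; the helper name and the non-App-Engine fallback are illustrative additions, not part of the original answer:
import os

import MySQLdb


def connect_to_cloudsql():
    # Values defined under env_variables in app.yaml
    connection_name = os.environ['CLOUDSQL_CONNECTION_NAME']
    user = os.environ['CLOUDSQL_USER']
    password = os.environ['CLOUDSQL_PASSWORD']

    if os.environ.get('SERVER_SOFTWARE', '').startswith('Google App Engine'):
        # On App Engine Standard, use the unix socket provided at /cloudsql/...
        return MySQLdb.connect(unix_socket='/cloudsql/' + connection_name,
                               user=user, passwd=password)
    # Outside App Engine (e.g. Cloud Shell or local development) that socket
    # does not exist; connect over TCP instead, e.g. via the Cloud SQL Proxy.
    return MySQLdb.connect(host='127.0.0.1', user=user, passwd=password)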

Opendaylight bundles in GracePeriod and cluster not coming up

We are using ODL Nitrogen. When we perform a warm start (i.e., restart the Karaf servers without deleting the KARAF_HOME/data folder), the following bundles stay in the "GracePeriod" state for a long time, so other application bundles that depend on them fail. However, when we start Karaf in a clean state (without the data folder), all bundles come up fine.
We also noticed that netty.tcp port 2550 does not get bound when the bundles go into the failure state. We confirmed that this port is not being used by any other process.
349 | GracePeriod | 80 | 2.3.3 | mdsal-eos-binding-adapter
350 | Active | 80 | 2.3.3 | mdsal-eos-binding-api
351 | Active | 80 | 2.3.3 | mdsal-eos-common-api
352 | Active | 80 | 2.3.3 | mdsal-eos-common-spi
376 | GracePeriod | 80 | 2.3.3 | mdsal-singleton-dom-impl
142 | Active | 80 | 2.4.20 | akka-actor
143 | Active | 80 | 2.4.20 | akka-cluster
144 | Active | 80 | 2.4.20 | akka-osgi
145 | Active | 80 | 2.4.20 | akka-persistence
146 | Active | 80 | 2.4.20 | akka-protobuf
147 | Active | 80 | 2.4.20 | akka-remote
148 | Active | 80 | 2.4.20 | akka-slf4j
149 | Active | 80 | 2.4.20 | akka-stream
310 | Active | 80 | 1.6.3 | org.opendaylight.controller.sal-akka-raft
We also observe the following log messages rolling continuously; only these messages appear, and very frequently. It seems they are preventing any other bundles from coming up.
2018-07-02 22:52:47,299 | WARN | saction-25-27'}} | 298 - org.opendaylight.controller.config-manager - 0.7.3 | DeadlockMonitor$DeadlockMonitorRunnable | ModuleIdentifier{factoryName='binding-broker-impl', instanceName='binding-broker-impl'} did not finish after 84984 ms
2018-07-02 22:52:50,717 | ERROR | rint Extender: 3 | 325 - org.opendaylight.controller.sal-distributed-datastore - 1.6.3 | AbstractDataStore | Shard leaders failed to settle in 90 seconds, giving up
Diag output of the GracePeriod bundles:
karaf@virtuora>diag 349
mdsal-eos-binding-adapter (349)
-------------------------------
Status: GracePeriod
Blueprint
7/3/18 6:17 PM
Missing dependencies:
(objectClass=org.opendaylight.mdsal.binding.dom.codec.api.BindingNormalizedNodeSerializer) (objectClass=org.opendaylight.mdsal.eos.dom.api.DOMEntityOwnershipService)
karaf@virtuora>diag 376
mdsal-singleton-dom-impl (376)
------------------------------
Status: GracePeriod
Blueprint
7/3/18 6:22 PM
Missing dependencies:
(objectClass=org.opendaylight.mdsal.eos.dom.api.DOMEntityOwnershipService)
Please let us know:
why akka is unable to open the netty TCP port
why the DOMEntityOwnershipService and BindingNormalizedNodeSerializer dependencies are reported as missing
You need to set SO_REUSEADDR to enable the port to be directly reused after it is closed. See https://docs.oracle.com/javase/7/docs/api/java/net/StandardSocketOptions.html#SO_REUSEADDR
If you do not set this option, the port will stay blocked for a while, depending on the operating system.
You should also avoid forcefully killing a process if possible, as that does not shut the ports down cleanly.
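As a language-agnostic illustration of what that option does (sketched in Python rather than in the Java/akka stack, purely to show the behaviour; it is not an ODL configuration change):
import socket

# Without SO_REUSEADDR, re-binding a port that was just closed can fail with
# EADDRINUSE while old connections linger in TIME_WAIT; with the option set,
# the bind succeeds immediately. 2550 is the akka netty.tcp port in question.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 2550))
sock.listen(1)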

Launch an openstack instance with two NIC

I want to launch an instance with two vNIC cards: one vNIC for a private network and the other vNIC for a public network. How can we do that in OpenStack?
You need to boot the image with the --nic net-id= argument given twice, as in the example below:
nova boot --image cirros-0.3.3-x86_64 --flavor tiny_ram_small_disk --nic net-id=b7ab2080-a71a-44f6-9f66-fde526bb73d3 --nic net-id=120a6fde-7e2d-4856-90ee-5609a5f3035f --security-group default --key-name bob-key CIRROSone
Here is the result:
root@columbo:~# nova list
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
| d75ef5b3-060d-4ec0-9ddf-a3685a7f1199 | CIRROSone | ACTIVE | - | Running | SecondVlan=5.5.5.4; SERVER_VLAN_1=10.255.1.16 |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
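For reference, a rough equivalent using python-novaclient (the library behind the nova CLI). The auth values are placeholders and the image/flavor lookups may differ by client version; the two net-ids are the ones from the command above, one per vNIC:
from keystoneauth1.identity import v3
from keystoneauth1.session import Session
from novaclient import client

# Placeholder credentials; substitute your own cloud's auth details.
auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='admin', password='secret', project_name='demo',
                   user_domain_name='Default', project_domain_name='Default')
nova = client.Client('2', session=Session(auth=auth))

server = nova.servers.create(
    name='CIRROSone',
    image=nova.glance.find_image('cirros-0.3.3-x86_64'),     # nova.images.find(name=...) on older releases
    flavor=nova.flavors.find(name='tiny_ram_small_disk'),
    nics=[{'net-id': 'b7ab2080-a71a-44f6-9f66-fde526bb73d3'},   # first vNIC (private network)
          {'net-id': '120a6fde-7e2d-4856-90ee-5609a5f3035f'}],  # second vNIC (public network)
    security_groups=['default'],
    key_name='bob-key')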

NoHttpResponseException on uploading file to S3 (camel-aws)

I am trying to upload a roughly 10 GB file from my local machine to S3 (inside a Camel route). Although the file gets uploaded in around 3-4 minutes, it also throws the following exception:
2014-06-26 13:53:33,417 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Download complete to local. Pushing file to S3
2014-06-26 13:54:19,465 | INFO | manager-worker-6 | AmazonHttpClient | 144 - org.apache.servicemix.bundles.aws-java-sdk - 1.5.1.1 | Unable to execute HTTP request: The target server failed to respond
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)[141:org.apache.httpcomponents.httpcore:4.2.4]
.......
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_55]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
2014-06-26 13:55:08,991 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Upload complete.
Because of this, the Camel route doesn't stop and continuously throws InterruptedException:
2014-06-26 13:55:11,182 | INFO | ads.com/outbound | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Disconnecting from cxportal.integralads.com port 22
2014-06-26 13:55:11,183 | INFO | lads.com session | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Caught an exception, leaving main loop due to Socket closed
2014-06-26 13:55:11,183 | WARN | lads.com session | eventadmin | 139 - org.apache.felix.eventadmin - 1.3.2 | EventAdmin: Exception: java.lang.InterruptedException
java.lang.InterruptedException
at EDU.oswego.cs.dl.util.concurrent.LinkedQueue.offer(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor.execute(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.DefaultThreadPool.executeTask(DefaultThreadPool.java:101)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.AsyncDeliverTasks.execute(AsyncDeliverTasks.java:105)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.handler.EventAdminImpl.postEvent(EventAdminImpl.java:100)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.adapter.LogEventAdapter$1.logged(LogEventAdapter.java:281)[139:org.apache.felix.eventadmin:1.3.2]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fire(LogReaderServiceImpl.java:134)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fireEvent(LogReaderServiceImpl.java:126)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.handleEvents(PaxLoggingServiceImpl.java:180)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:145)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:86)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.ops4j.pax.logging.slf4j.Slf4jLogger.info(Slf4jLogger.java:476)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.apache.camel.component.file.remote.SftpOperations$JSchLogger.log(SftpOperations.java:359)[110:org.apache.camel.camel-ftp:2.12.1]
at com.jcraft.jsch.Session.run(Session.java:1621)[109:org.apache.servicemix.bundles.jsch:0.1.49.1]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
Please see my code below and let me know where I am going wrong:
TransferManager tm = new TransferManager(
        S3Client.getS3Client());
// TransferManager processes all transfers asynchronously,
// so this call will return immediately.
Upload upload = tm.upload(
        Utils.getProperty(Constants.BUCKET),
        getS3Key(file.getName()), file);
try {
    upload.waitForCompletion();
    logger.info("Upload complete.");
} catch (AmazonClientException amazonClientException) {
    logger.warn("Unable to upload file, upload was aborted.");
    amazonClientException.printStackTrace();
}
The stack trace doesn't even have any reference to my code, so I couldn't determine where the issue is.
Any help or pointer would be really appreciated.
Thanks

Error calling an external SOAP web service with WSClient

I have created a SOAP webservice that I would like to access from Grails.
I have installed the plugin ws-client in order to use the object WSClient.
I have tried with the example given here: http://groovy.codehaus.org/Using+WSClient+in+Grails
So my code is:
def index = {
    def proxy = new WSClient("http://www.w3schools.com/webservices/tempconvert.asmx?WSDL", this.class.classLoader)
    proxy.initialize()
    def result = proxy.CelsiusToFahrenheit(0)
    result = "You are probably freezing at ${result} degrees Fahrenheit"
    flash.message = result
}
This is the error I get:
javac: target release 1.5 conflicts with default source release 1.7
| Error 2013-02-27 17:47:06,901 [http-bio-8080-exec-10] ERROR errors.GrailsExceptionResolver - JAXBException occurred when processing request: [POST] /WordGame/game/create
"org.tempuri" doesnt contain ObjectFactory.class or jaxb.index. Stacktrace follows:
Message: "org.tempuri" doesnt contain ObjectFactory.class or jaxb.index
Line | Method
->> 197 | createContext in com.sun.xml.bind.v2.ContextFactory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 172 | newInstance in javax.xml.bind.ContextFinder
| 132 | newInstance . in ''
| 334 | find in ''
| 431 | newInstance . in javax.xml.bind.JAXBContext
| 349 | createClient in org.apache.cxf.endpoint.dynamic.DynamicClientFactory
| 196 | createClient in ''
| 175 | createClient in ''
| 198 | createClient in groovyx.net.ws.AbstractCXFWSClient
| 107 | initialize in groovyx.net.ws.WSClient
| 30 | conversion . in wordgame.GameController$$ENyfXWG9
| 42 | doCall in wordgame.GameController$_closure1$$ENyfXWG9
| 195 | doFilter . . in grails.plugin.cache.web.filter.PageFragmentCachingFilter
| 63 | doFilter in grails.plugin.cache.web.filter.AbstractFilter
| 1110 | runWorker . . in java.util.concurrent.ThreadPoolExecutor
| 603 | run in java.util.concurrent.ThreadPoolExecutor$Worker
^ 722 | run . . . . . in java.lang.Thread
I know the error is not from calling the method proxy.CelsiusToFahrenheit(0), because I get the same error with just:
def proxy = new WSClient("http://www.w3schools.com/webservices/tempconvert.asmx?WSDL", this.class.classLoader)
proxy.initialize()
I have tried with another web service I created, but I get the same error.
I have searched on Google and seen a lot of people with this issue, but I didn't find how to fix it.
Config:
Windows 7 x64
Netbeans 7.2.1
Grails 2.2.0
Does someone know how to fix this problem?
Thanks for the answer, but the problem was with plugins. To make it work, I needed to install these plugins:
cxf
cxf-client
Installing these two plugins resolved the problem.
From the error message
javac: target release 1.5 conflicts with default source release 1.7
I think it's about the JDK version. WSClient needs the JDK to compile something at runtime, so you have to deploy a 1.7 JDK to support that.