I'm running through a simple (and admittedly useless) toy example using PCF on Azure, trying to create and run the stream 'time | log'.
I successfully get SCDF started and the stream created, but when I try to deploy the stream, SCDF creates two (cf) apps that won't run. They exist as far as cf apps is concerned:
○ → cf apps
Getting apps in org tess / space tess as admin...
OK
name                              requested state   instances   memory   disk   urls
yascdf-server                     started           1/1         2G       2G     yascdf-server.apps.cf.tess.info
yascdf-server-LE7xs4r-tess-log    stopped           0/1         512M     2G     yascdf-server-LE7xs4r-tess-log.apps.cf.tess.info
yascdf-server-LE7xs4r-tess-time   stopped           0/1         512M     2G     yascdf-server-LE7xs4r-tess-time.apps.cf.tess.info
If I try to view the logs for either app, nothing ever returns. But the logs in Apps Manager look like this:
2017-08-10T10:24:42.147-04:00 [API/0] [OUT] Created app with guid de8fee78-0902-4df7-a7ae-bba8a7710dca
2017-08-10T10:24:43.314-04:00 [API/0] [OUT] Updated app with guid de8fee78-0902-4df7-a7ae-bba8a7710dca ({"route"=>"97e1d26b-d950-479e-b9df-fe1f3b0c8a74", :verb=>"add", :relation=>"routes", :related_guid=>"97e1d26b-d950-479e-b9df-fe1f3b0c8a74"})
The routes don't work:
404 Not Found: Requested route ('yascdf-server-LE7xs4r-tess-log.apps.cf.tess.info') does not exist.
And trying to (re)start either app, I get:
○ → cf start yascdf-server-LE7xs4r-tess-log
Starting app yascdf-server-LE7xs4r-tess-log in org tess / space tess as admin...
Staging app and tracing logs...
The app package is invalid: bits have not been uploaded
FAILED
Here's the SCDF shell session I ran, in case it helps:
server-unknown:>dataflow config server http://yascdf-server.apps.cf.tess.info/
Successfully targeted http://yascdf-server.apps.cf.cfpush.info/
dataflow:>app import --uri http://.../1-0-4-GA-stream-applications-rabbit-maven
Successfully registered applications: [<chop>]
dataflow:>stream create tess --definition "time | log"
Created new stream 'tess'
dataflow:>stream deploy tess
Deployment request has been sent for stream 'tess'
dataflow:>
Does anyone know what's going on here? I'd be grateful for a nudge...
Spring Cloud Data Flow: Server
1.2.3 (using built spring-cloud-dataflow-server-cloudfoundry-1.2.3.BUILD-SNAPSHOT.jar)
Spring Cloud Data Flow: Shell
1.2.3 (using downloaded spring-cloud-dataflow-shell-1.2.3.RELEASE.jar)
Deployment Environment
PCF v1.11.6 (on Azure)
PCF Dev v0.26.0 (on Mac)
App Starters
http://bit-dot-ly/1-0-4-GA-stream-applications-rabbit-maven
Logs
stream deploy log
It has been identified that the OP was using java-buildpack 4.4 (JBP4). When running SCDF against this version, there is an issue with memory allocation in reactor-netty (used internally by JBP4), which causes the out-of-memory error. The Reactor team is addressing this issue in the upcoming 0.6.5 release, and JBP4 will adapt to it eventually.
With all this said, SCDF is still not compatible with JBP4. It is recommended to downgrade to JBP 3.19, or the latest release in that line, instead.
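One way to apply that recommendation is to point the SCDF server's Cloud Foundry deployer at a JBP 3.x buildpack, so the stream apps it pushes use that instead of the foundation's default. This is only a sketch; the environment variable name is an assumption for this SCDF version, so verify it against the Cloud Foundry deployer properties in the server's reference guide:
# Hedged sketch: pin deployed stream apps to java-buildpack 3.19 instead of JBP4.
# The variable name below is an assumption; check the SCDF CF deployer docs.
cf set-env yascdf-server SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK "https://github.com/cloudfoundry/java-buildpack.git#v3.19"
cf restage yascdf-server
After restaging, undeploy and redeploy the stream so the time and log apps are pushed with the pinned buildpack.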
Related
After my app is successfully pushed via cf, I usually need to manually SSH into the container and execute a couple of PHP scripts to clear and warm up my cache, and potentially run some DB schema updates, etc.
Today I found out about Cloud Foundry tasks, which seem to offer a neat way to do exactly this kind of thing, and I wanted to test whether I can integrate them into my build-and-deploy script.
So I used cf login, successfully connected to the right org and space, the app has been pushed and is running, and I tried this command:
cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
(I tried it with a couple of path variations like app/bin/console, etc.)
And this was the output:
Creating task for app MYAPP in org MYORG / space MYSPACE as me#myemail...
Unexpected Response
Response Code: 404
FAILED
CF CLI version: 6.32.0
cf logs ArcticTenTestBackend --recent does not output anything (this might be because I have enabled an ELK instance for logging; when I wanted to service-connect to ELK to look up the logs, I found out that the service-connector cf plugin is gone, for which I will open a new ticket).
Created new Issue for that: https://github.com/cloudfoundry/cli/issues/1242
This is not a CF CLI issue. Swisscom Application Cloud does not yet support Cloud Foundry tasks, which explains the 404 you are currently receiving. We will expose this feature of Cloud Foundry in an upcoming release of Swisscom Application Cloud.
In the meantime, maybe you can find a way to execute your one-off tasks (cache warming, DB migrations) at application startup.
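For example, Cloud Foundry runs a .profile script from the application root before starting the app, so startup hooks can live there. A minimal sketch, assuming a Symfony-style console; the commands are illustrative placeholders for your own cache-warming and schema-update scripts:
# .profile - sketch of running one-off work at application startup.
# Replace the commands below with your actual scripts.
php bin/console cache:clear --env=prod
php bin/console cache:warmup --env=prod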
As mentioned by Mathis Kretz, Swisscom has since enabled cf run-task. They sent out e-mails on 22 November 2018 to announce the feature.
As described in the documentation you linked, you use the following commands to manage tasks:
cf tasks [APP_NAME]
cf run-task [APP_NAME] [COMMAND]
cf terminate-task [APP_NAME] [TASK_ID]
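For wiring this into a build-and-deploy script, a rough sketch using the command from the question; the state check is just a plain read of the cf tasks table:
# Sketch: kick off the one-off task after a successful push, then check its state.
cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
cf tasks MYAPP    # the task should show up as RUNNING, then SUCCEEDED or FAILED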
I am running my WildFly 10.1.0 server on Linux on an Amazon EC2 instance. I have written start and stop scripts for the server. Whenever I stop my server and restart it after some time, I get the following exception:
WFLYCTL0013: Operation ("add") failed - address: ([("deployment" => "rapid.ear")]) - failure description: "WFLYSRV0137: No deployment content with hash dd66eee901c4bf79dd6659873df918e1b639bc1b is available in the deployment content repository for deployment 'rapid.ear'. This is a fatal boot error. To correct the problem, either restart with the --admin-only switch set and use the CLI to install the missing content or remove it from the configuration, or remove the deployment from the xml configuration file and restart."
When I remove the entry for that WAR from standalone.xml, I am able to restart the server, but I need a more permanent solution.
The start script I have written is:
nohup /data/wildfly-10.1.0.Final/bin/standalone.sh -Djavax.net.ssl.trustStore="/usr/java/jdk1.8.0_121/jre/lib/security/jssecacerts" --server-config=standalone.xml &
And the stop script is:
sh /data/wildfly-10.1.0.Final/bin/jboss-cli.sh --connect command=:shutdown
It may not be quite as efficient in terms of I/O, but if you've got a standalone instance, you can just take advantage of the deployment scanner. I have:
<subsystem xmlns="urn:jboss:domain:deployment-scanner:2.0">
    <deployment-scanner name="myapp" path="/home/wildfly/sites/www.mysite.tld" scan-interval="60000" auto-deploy-exploded="true"/>
</subsystem>
in my standalone-full.xml (you may or may not need the "-full" part). I then deploy my webapp to "/home/wildfly/sites/www.mysite.tld" and can update it as needed. The configuration I show only scans the directory once a minute, so it isn't terrible on I/O.
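The update step is then just syncing the new build into that location and letting the scanner pick it up on its next pass, roughly like this (the build/ source directory is an assumption; the target path matches the snippet above):
# Sketch: push an updated exploded build into the scanned directory; the scanner
# (60s interval, auto-deploy-exploded) redeploys it on the next scan.
rsync -a --delete build/www.mysite.tld/ /home/wildfly/sites/www.mysite.tld/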
Again, your deployment may be different than mine.
I am installing Cloudera Manager on an AWS EC2 instance, following the official instructions:
http://www.cloudera.com/documentation/archive/manager/4-x/4-6-0/Cloudera-Manager-Installation-Guide/cmig_install_on_EC2.html
I successfully ran the .bin installer, but when I visit IP:7180 the browser says my access has been denied... Why?
I tried to check the status of the CM server with service cloudera-scm-server status. At first it said:
cloudera-scm-server is dead and pid file exists
The log file mentioned "unknown host ip-10-0-0-110", so I added a mapping between ip-10-0-0-110 and the EC2 instance's public IP and restarted the scm-server service. It then ran normally, but IP:7180 remained inaccessible, saying ERR_CONNECTION_REFUSED. I have uninstalled iptables and turned off my Windows firewall.
After a few minutes, "cloudera-scm-server is dead and pid file exists" appeared again...
Using tail -40 /var/log/cloudera-scm-server/cloudera-scm-server.out:
JAVA_HOME=/usr/lib/jvm/java-7-oracle-cloudera
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000794223000, 319201280, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 319201280 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/hs_err_pid5523.log
What type of EC2 instance are you using? The error is pretty descriptive and indicates that CM is unable to allocate memory. Maybe you are using an instance type with too little RAM.
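A quick way to confirm this on the instance is to compare the available memory with what the JVM tried to commit (about 319 MB here, on top of whatever else is already running):
# Sketch: check how much memory the instance actually has available.
free -m
grep MemTotal /proc/meminfo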
Also - the docs you are referencing are out of date. The latest docs on deploying CDH5 in the cloud can be found here: https://www.cloudera.com/documentation/director/latest/topics/director_get_started_aws.html
These docs also recommend using Cloudera Director which will simplify much of the deployment and configuration of your cluster.
I am attempting to deploy some changes to a loopback app running on a remote Ubuntu box on top of strong-pm.
The changes that I make locally are not being reflected in what gets deployed to the server. Here are the commands I execute:
$ slc build
$ slc deploy http://IPADDRESS deploy
to which I get a successful deploy message that looks like this:
peter#peters-MacBook-Pro ~/Desktop/projects/www/places-api master slc deploy http://PADDRESS deploy
Counting objects: 5740, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (5207/5207), done.
Writing objects: 100% (5740/5740), 7.14 MiB | 2.80 MiB/s, done.
Total 5740 (delta 1555), reused 150 (delta 75)
To http://PADDRESS:8701/api/services/1/deploy/default
* [new branch] deploy -> deploy
Deployed `deploy` as `placesAPI` to `http://IPADDRESS:8701/`
Checking the deployed files on the server here:
/var/lib/strong-pm/svc/1/work
I can see that the changes I made to the local app are not reflected in what has just been deployed to the server.
In order to check that the changes are reflected in the build, I checked out the deploy git repository, like so:
git checkout deploy
Inspecting the files here, I can see that the changes I made are present.
Does anyone know why the changes are not reflected in what is deployed to the server?
I know this is an old post, but for anyone getting this issue: I just encountered the same problem.
Finally I used slc arc and did the build from there.
Make sure that "Fully qualified path to archive" has a correct value.
It should be something like:
../project-1.0.0.tgz
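As a command-line alternative to the Arc UI, you can also hand slc deploy an explicit archive so there is no ambiguity about which build gets shipped. This is only a sketch; the --pack flag and the archive name are assumptions, so check slc build --help on your version:
# Sketch: build a tarball and deploy that exact artifact to strong-pm.
slc build --pack
slc deploy http://IPADDRESS ../project-1.0.0.tgz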
I have a problem with my application logs on my Cloud Foundry deployment.
I've deployed Cloud Foundry in a somewhat minimized design based on the tiny-aws deployment of https://github.com/cloudfoundry-community/cf-boshworkspace.
I further minimized the deployment and put everything from the VMs "api", "backbone", "health" and "services" together on the api machines.
So I have the following VMs:
api (2 instances)
data (1 instance)
runner (2 instances)
haproxy (1 public and 1 private proxy)
The Cloud Foundry version is 212.
The deployment itself seems to work. I can deploy apps and they start up.
But the logs from my applications don't show up when I run
"cf logs my-app --recent"
I've tried several logging configurations in my Spring Boot app:
the standard configuration without modifications, which should log to STDOUT according to the Spring Boot documentation
an explicitly set log4j.properties file, also configured to log to STDOUT
a Log4j 2 configuration logging to STDOUT
a Spring Boot configuration that logs to a file
In the last configuration, the file was created and my logs were shown when I ran "cf files my-app log/my-app.log".
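For reference, a file-logging configuration like that can be as simple as the following sketch, assuming Spring Boot 1.x property names; the path matches the cf files command above:
# application.properties - sketch of the file-logging variant
logging.file=log/my-app.log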
I tried to debug where my logs get lost, but I couldn't find anything.
The dea_logging_agent seems to run and has the correct NATS location configured, and so does the DEA itself.
Loggregator also seems to run fine on the api host and appears to be connected to NATS.
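A quick sanity check is to stream the logs live while restarting the app, to see whether anything at all (even router RTR lines) makes it through Loggregator; a sketch with a placeholder app name:
# Sketch: stream logs in one terminal while restarting in another.
cf logs my-app
cf restart my-app   # run in a second terminal; RTR/API lines should appear if Loggregator works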
So my question is: In which locations should I search to find out where my logs go?
Thank you very much.