I have an issue with my Cloud Foundry installation on vSphere. After an upgrade to 1.6 I started to get "migrator is not current" errors in the Cloud Controller clock and worker components, and they no longer come up.
[2015-12-10 11:36:19+0000] ------------ STARTING cloud_controller_clock_ctl at Thu Dec 10 11:36:19 UTC 2015 --------------
[2015-12-10 11:36:23+0000] rake aborted!
[2015-12-10 11:36:23+0000] Sequel::Migrator::NotCurrentError: migrator is not current
[2015-12-10 11:36:23+0000] Tasks: TOP => clock:start
[2015-12-10 11:36:23+0000] (See full trace by running task with --trace)
After googling this, the only thing I found was this mailing list thread: https://lists.cloudfoundry.org/archives/list/cf-bosh#lists.cloudfoundry.org/message/GIOTVF2A77KREO4ESHSY7ZXZJKM5ZULA/. Can I migrate my Cloud Controller DB manually? Does anyone know how to fix this? I'd be very grateful!
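In case someone wants to try this by hand, here is a rough sketch of running the Cloud Controller migrations manually. The VM/job name, paths, and config environment variable are assumptions based on a typical cf-release layout, so verify them against your own deployment before running anything:

# Sketch only: names and paths below are assumptions; check your deployment first.
bosh ssh api_z1/0                  # the VM running cloud_controller_ng
sudo su -
export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml
cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng
bundle exec rake db:migrate        # applies the pending Sequel migrations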
Hi, I have created an Apache Beam pipeline, tested it, and run it from inside Eclipse, both locally and using the Dataflow runner. I can see in the Eclipse console that the pipeline is running, and I also see the details, i.e. the logs, on the console.
Now, how do I deploy this pipeline to GCP so that it keeps working irrespective of the state of my machine? For example, if I run it using `mvn compile exec:java`, the console shows it is running, but I cannot find the job in the Dataflow UI.
Also, what will happen if I kill the process locally? Will the job on the GCP infrastructure also be stopped? How do I know that a job has been triggered on the GCP infrastructure, independent of my machine's state?
The output of `mvn compile exec:java` with arguments is as follows:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Users/ThakurG/.m2/repository/org/slf4j/slf4j-jdk14/1.7.14/slf4j-jdk14-1.7.14.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Users/ThakurG/.m2/repository/org/slf4j/slf4j-nop/1.7.25/slf4j-nop-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
Jan 08, 2018 5:33:22 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ main
INFO: starting the process...
Jan 08, 2018 5:33:25 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ createStream
INFO: pipeline created::Pipeline#73387971
Jan 08, 2018 5:33:27 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ main
INFO: pie crated::Pipeline#73387971
Jan 08, 2018 5:54:57 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ$1 apply
INFO: Message received::1884408,16/09/2017,A,2007156,CLARK RUBBER FRANCHISING PTY LTD,A ,5075,6,Y,296,40467910,-34.868095,138.683535,66 SILKES RD,,,PARADISE,5075,0,7.4,5.6,18/09/2017 2:09,0.22
Jan 08, 2018 5:54:57 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ$1 apply
INFO: Payload from msg::1884408,16/09/2017,A,2007156,CLARK RUBBER FRANCHISING PTY LTD,A ,5075,6,Y,296,40467910,-34.868095,138.683535,66 SILKES RD,,,PARADISE,5075,0,7.4,5.6,18/09/2017 2:09,0.22
Jan 08, 2018 5:54:57 PM com.trial.apps.gcp.df.ReceiveAndPersistToBQ$1 apply
This is the maven command I'm using from the cmd prompt:
`mvn compile exec:java -Dexec.mainClass=com.trial.apps.gcp.df.ReceiveAndPersistToBQ -Dexec.args="--project=analyticspoc-XXX --stagingLocation=gs://analytics_poc_staging --runner=DataflowRunner --streaming=true"`
This is the piece of code I'm using to create the pipeline and set the options on it:
PipelineOptions options = PipelineOptionsFactory.create();
DataflowPipelineOptions dfOptions = options.as(DataflowPipelineOptions.class);
dfOptions.setRunner(DataflowRunner.class);
dfOptions.setJobName("gcpgteclipse");
dfOptions.setStreaming(true);
// Then create the pipeline.
Pipeline pipeL = Pipeline.create(dfOptions);
Can you clarify what exactly you mean by "console shows it is running" and by "can not find the job using Dataflow UI"?
If your program's output prints the message:
To access the Dataflow monitoring console, please navigate to https://console.developers.google.com/project/.../dataflow/job/....
Then your job is running on the Dataflow service. Once it's running, killing the main program will not stop the job - all the main program does is periodically poll the Dataflow service for the status of the job and new log messages. Following the printed link should take you to the Dataflow UI.
If this message is not printed, then perhaps your program is getting stuck somewhere before actually starting the Dataflow job. If you include your program's output, that will help debugging.
To deploy a pipeline to be executed by Dataflow, you specify the runner and project execution parameters through the command line or via the DataflowPipelineOptions class. runner must be set to DataflowRunner (Apache Beam 2.x.x) and project is set to your GCP project ID. See Specifying Execution Parameters. If you do not see the job in the Dataflow Jobs UI list, then it is definitely not running in Dataflow.
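For illustration, here is a minimal sketch of wiring those execution parameters in from the command line with Beam 2.x. The class name is made up; PipelineOptionsFactory.fromArgs is the standard Beam API:

import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class SubmitToDataflow {
    public static void main(String[] args) {
        // Parses --runner=DataflowRunner, --project=..., --stagingLocation=...
        // from the command line and fails fast if a required option is missing.
        DataflowPipelineOptions options = PipelineOptionsFactory
                .fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        Pipeline pipeline = Pipeline.create(options);
        // ... apply your transforms here ...
        pipeline.run();
    }
}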
If you kill the process that deploys a job to Dataflow, then the job will continue to run in Dataflow. It will not be stopped.
This is trivial, but to be absolutely clear, you must call run() on the Pipeline object in order for it to be executed (and therefore deployed to Dataflow). The return value of run() is a PipelineResult object which contains various methods for determining the status of a job. For example, you can call pipeline.run().waitUntilFinish(); to force your program to block execution until the job is complete. If your program is blocked, then you know the job was triggered. See the PipelineResult section of the Apache Beam Java SDK docs for all of the available methods.
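Applied to the snippet from the question (pipeL being the Pipeline created there), a sketch would be:

// PipelineResult lives in org.apache.beam.sdk.
// run() submits the job; with DataflowRunner this is the point at which
// the job is actually created on the Dataflow service.
PipelineResult result = pipeL.run();
// waitUntilFinish() blocks until the job reaches a terminal state and
// returns it (DONE, FAILED, CANCELLED, ...).
PipelineResult.State state = result.waitUntilFinish();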
I am new to Google Cloud Platform, so I might be asking simple questions.
I was testing the StarterPipeline and Word Count examples using the Cloud Dataflow API, and although these work locally, both fail if I try to run the pipelines on the Cloud Dataflow service.
I've verified that all API required are enabled and I am successfully authenticated.
There are NO messages in the log files, and the only thing I see is that the job stages class files on Cloud Storage and then reports "Job finished with status FAILED" before starting the worker pool (log below).
Any thoughts and suggestions would be greatly appreciated!
Thanks, Vladimir
Sep 15, 2015 4:13:09 PM com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 45 files. Enable logging at DEBUG level to see which files will be staged.
Sep 15, 2015 4:13:09 PM com.google.cloud.dataflow.sdk.Pipeline applyInternal
WARNING: Transform AnonymousParDo2 does not have a stable unique name. This will prevent updating of pipelines.
Sep 15, 2015 4:13:09 PM com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner run
INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
Sep 15, 2015 4:13:09 PM com.google.cloud.dataflow.sdk.util.PackageUtil stageClasspathElements
INFO: Uploading 45 files from PipelineOptions.filesToStage to staging location to prepare for execution.
Sep 15, 2015 4:13:19 PM com.google.cloud.dataflow.sdk.util.PackageUtil stageClasspathElements
INFO: Uploading PipelineOptions.filesToStage complete: 0 files newly uploaded, 45 files cached
Dataflow SDK version: 1.0.0
Sep 15, 2015 4:13:20 PM com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner run
INFO: To access the Dataflow monitoring console, please navigate to https://console.developers.google.com/project/XXXXXXXXXXXXXXXXXXX/dataflow/job/2015-09-15_07_13_20-12403932015881940310
Submitted job: 2015-09-15_07_13_20-12403932015881940310
Sep 15, 2015 4:13:20 PM com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner run
INFO: To cancel the job using the 'gcloud' tool, run:
> gcloud alpha dataflow jobs --project=XXXXXXXXXXXXXXXXXXX cancel 2015-09-15_07_13_20-12403932015881940310
Sep 15, 2015 4:13:27 PM com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner run
INFO: Job finished with status FAILED
Exception in thread "main" com.google.cloud.dataflow.sdk.runners.DataflowJobExecutionException: Job 2015-09-15_07_13_20-12403932015881940310 failed with status FAILED
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:155)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:56)
at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:176)
at com.google.cloud.dataflow.starter.StarterPipeline.main(StarterPipeline.java:68)
vSphere 5.5
Ops Manager 1.3.4.0
Elastic Runtime 1.3.4.0
Ops Metrics 1.3.3.0
Install fails on step: Running errand Run Smoke Tests for Pivotal Elastic Runtime
I can't go into the VM to troubleshoot what is going on, because when the smoke tests fail the smoke-tests VM is removed. I can skip the smoke-test errand and the install will complete, but I am trying to figure out why the errand will not complete properly. Any help is greatly appreciated.
Here is a complete link to my install log https://dl.dropboxusercontent.com/u/14091323/cf-install.log
Here is an excerpt from the install log where the failure happens:
Errand `push-app-usage-service' completed successfully (exit code 0)
{"type": "step_finished", "id": "errands.running.cf-9b93ae0464e2a248f279.push-app-usage-service"}
{"type": "step_started", "id": "errands.running.cf-9b93ae0464e2a248f279.smoke-tests"}
46ab6197-dd49-46f1-9631-1249406d452f
Deployment set to `/var/tempest/workspaces/default/deployments/cf-9b93ae0464e2a248f279.yml'
Director task 52
Deprecation: Please use `templates' when specifying multiple templates for a job. `template' for multiple templates will soon be unsupported.
Deprecation: Please use `templates' when specifying multiple templates for a job. `template' for multiple templates will soon be unsupported.
Deprecation: Please use `templates' when specifying multiple templates for a job. `template' for multiple templates will soon be unsupported.
Deprecation: Please use `templates' when specifying multiple templates for a job. `template' for multiple templates will soon be unsupported.
Deprecation: Please use `templates' when specifying multiple templates for a job. `template' for multiple templates will soon be unsupported.
Deprecation: Please use `templates' when specifying multiple templates for a job. `template' for multiple templates will soon be unsupported.
Started preparing deployment
Started preparing deployment > Binding deployment. Done (00:00:00)
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done (00:00:00)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Done (00:00:00)
Started preparing deployment > Binding properties. Done (00:00:00)
Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
Started preparing deployment > Binding instance networks. Done (00:00:00)
Done preparing deployment (00:00:00)
Started preparing package compilation > Finding packages to compile. Done (00:00:00)
Started creating bound missing vms > smoke-tests/0. Done (00:00:37)
Started binding instance vms > smoke-tests/0. Done (00:00:00)
Started updating job smoke-tests > smoke-tests/0 (canary). Done (00:00:45)
Started running errand > smoke-tests/0. Done (00:00:38)
Started fetching logs for smoke-tests/0 > Finding and packing log files. Done (00:00:01)
Started deleting errand instances smoke-tests > vm-0207c40c-3551-4436-834d-7037871efdb5. Done (00:00:05)
Task 52 done
Started 2015-04-12 21:23:27 UTC
Finished 2015-04-12 21:25:36 UTC
Duration 00:02:09
Errand `smoke-tests' completed with error (exit code 1)
[stdout]
################################################################################################################
go version go1.2.1 linux/amd64
CONFIG=/var/vcap/jobs/smoke-tests/bin/config.json
{
"suitename" : "CFSMOKETESTS",
"api" : "https://api.cf.lab.local",
"appsdomain" : "cf.lab.local",
"user" : "smoketests",
"password" : "ad445f38ca9bbf21933e",
"org" : "CFSMOKETESTORG",
"space" : "CFSMOKETESTSPACE",
"useexistingorg" : false,
"useexistingspace" : false,
"loggingapp" : "",
"runtimeapp" : "",
"skipsslvalidation": true
}
CONFIG=/var/vcap/jobs/smoke-tests/bin/config.json
GOPATH=/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace:/var/vcap/packages/smoke-tests
GOROOT=/var/vcap/data/packages/golang/aa5f90f06ada376085414bfc0c56c8cd67abba9c.1-f892239e5c78542d10f4d8f098d9b892c0b27bc1
OLDPWD=/var/vcap/bosh
PATH=/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace/bin:/var/vcap/packages/smoke-tests/bin:/var/vcap/packages/cli/bin:/var/vcap/data/packages/golang/aa5f90f06ada376085414bfc0c56c8cd67abba9c.1-f892239e5c78542d10f4d8f098d9b892c0b27bc1/bin:/var/vcap/packages/git/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests
SHLVL=1
TMPDIR=/var/vcap/data/tmp
_=/usr/bin/env
################################################################################################################
Running smoke tests...
/var/vcap/data/packages/golang/aa5f90f06ada376085414bfc0c56c8cd67abba9c.1-f892239e5c78542d10f4d8f098d9b892c0b27bc1/bin/go
Running Suite: CF-Smoke-Tests
Random Seed: 1428873898
Will run 2 of 2 specs
Loggregator:
can see app messages in the logs
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/smoke/loggregator_test.go:37
> cf api https://api.cf.lab.local --skip-ssl-validation
Setting api endpoint to https://api.cf.lab.local...
FAILED
i/o timeout
> cf delete-space CFSMOKETEST_SPACE -f
No API endpoint targeted. Use 'cf login' or 'cf api' to target an endpoint.
• Failure [5.240 seconds]
Loggregator: [BeforeEach]
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/smoke/loggregator_test.go:38
can see app messages in the logs
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/smoke/loggregator_test.go:37
Expected
<int>: 1
to match exit code:
<int>: 0
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace/src/github.com/cloudfoundry-incubator/cf-test-helpers/cf/as_user.go:39
------------------------------
Runtime:
can be pushed, scaled and deleted
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/smoke/runtime_test.go:62
> cf api https://api.cf.lab.local --skip-ssl-validation
Setting api endpoint to https://api.cf.lab.local...
OK
API endpoint: https://api.cf.lab.local (API version: 2.13.0)
Not logged in. Use 'cf login' to log in.
> cf auth smoke_tests ad445f38ca9bbf21933e
API endpoint: https://api.cf.lab.local
Authenticating...
OK
Use 'cf target' to view or set your target org and space
> cf create-quota CFSMOKETESTORGQUOTA -m 10G -r 10 -s 2
Creating quota CFSMOKETESTORGQUOTA as smoke_tests...
FAILED
i/o timeout
> cf delete-space CFSMOKETEST_SPACE -f
FAILED
No org targeted, use 'cf target -o ORG' to target an org.
• Failure [15.910 seconds]
Runtime: [BeforeEach]
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/smoke/runtime_test.go:63
can be pushed, scaled and deleted
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/smoke/runtime_test.go:62
Expected
<int>: 1
to match exit code:
<int>: 0
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/smoke/init_test.go:59
------------------------------
Summarizing 2 Failures:
[Fail] [BeforeEach] Loggregator: can see app messages in the logs
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace/src/github.com/cloudfoundry-incubator/cf-test-helpers/cf/as_user.go:39
[Fail] [BeforeEach] Runtime: can be pushed, scaled and deleted
/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/smoke/init_test.go:59
Ran 2 of 2 Specs in 21.151 seconds
FAIL! -- 0 Passed | 2 Failed | 0 Pending | 0 Skipped --- FAIL: TestSmokeTests (21.15 seconds)
FAIL
Ginkgo ran 1 suite in 31.489423576s
Test Suite Failed
Smoke Tests Complete; exit status: 1
[stderr]
+ which go
+ localgopath=/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace
+ mkdir -p /var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace/bin
+ export GOPATH=/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace:/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace:/var/vcap/packages/smoke-tests
+ export PATH=/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace/bin:/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry-incubator/cf-smoke-tests/Godeps/_workspace/bin:/var/vcap/packages/smoke-tests/bin:/var/vcap/packages/cli/bin:/var/vcap/data/packages/golang/aa5f90f06ada376085414bfc0c56c8cd67abba9c.1-f892239e5c78542d10f4d8f098d9b892c0b27bc1/bin:/var/vcap/packages/git/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ go install -v github.com/onsi/ginkgo/ginkgo
io
bytes
bufio
syscall
time
os
fmt
flag
github.com/onsi/ginkgo/config
go/token
strings
path/filepath
go/scanner
go/ast
path
regexp/syntax
regexp
io/ioutil
net/url
text/template/parse
text/template
go/doc
go/parser
log
go/build
text/tabwriter
go/printer
go/format
os/exec
github.com/onsi/ginkgo/ginkgo/convert
github.com/onsi/ginkgo/ginkgo/nodot
github.com/onsi/ginkgo/ginkgo/testsuite
encoding/base64
encoding/json
encoding/xml
github.com/onsi/ginkgo/types
github.com/onsi/ginkgo/reporters/stenographer
github.com/onsi/ginkgo/reporters
hash
crypto
crypto/md5
encoding/binary
net
compress/flate
hash/crc32
compress/gzip
crypto/cipher
crypto/aes
crypto/des
math/big
crypto/elliptic
crypto/ecdsa
crypto/hmac
crypto/rand
crypto/rc4
crypto/rsa
crypto/sha1
crypto/sha256
crypto/dsa
encoding/asn1
crypto/x509/pkix
encoding/hex
encoding/pem
crypto/x509
crypto/tls
mime
net/textproto
mime/multipart
net/http
github.com/onsi/ginkgo/internal/remote
github.com/onsi/ginkgo/ginkgo/testrunner
github.com/onsi/ginkgo/ginkgo/watch
os/signal
github.com/onsi/ginkgo/ginkgo
+ ginkgo -r -v -slowSpecThreshold=300
{"type": "step_finished", "id": "errands.running.cf-9b93ae0464e2a248f279.smoke-tests"}
Exited with 1.
It turns out this was a bug in Pivotal Cloud Foundry 1.3. You will only see it if you use separate Deployment and Infrastructure networks (as is recommended). The bug is fixed in Pivotal Cloud Foundry 1.4.
I have outlined in detail what was going on here:
http://www.feeny.org/smoke-tests-fail-pivotal-cloud-foundry-1-3-solution/
Basically, the short of it is: the smoke-tests errand VM is created with the Ops Manager's Infrastructure-network IP address in its /etc/resolv.conf. This creates an asymmetric routing situation and results in a timeout. It can be fixed by making the following change on the Ops Manager VM.
To change this behaviour in Pivotal CF v1.3.x, on the Ops Manager VM, edit /home/tempest-web/tempest/app/models/tempest/manifests/network_section.rb and change line 20 from
"dns" => [microbosh_dns_ip] + network.parsed_dns,
to
"dns" => network.parsed_dns,
then restart the tempest-web service:
sudo service tempest-web stop
sudo service tempest-web start
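If you prefer to script the edit, here is a hypothetical one-shot equivalent of the steps above. It backs the file up first, and the sed pattern assumes line 20 reads exactly as quoted:

sudo cp /home/tempest-web/tempest/app/models/tempest/manifests/network_section.rb{,.bak}
sudo sed -i 's/\[microbosh_dns_ip\] + network\.parsed_dns/network.parsed_dns/' \
  /home/tempest-web/tempest/app/models/tempest/manifests/network_section.rb
sudo service tempest-web stop && sudo service tempest-web start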
Now you can re-enable the smoke-tests errand, re-apply changes, and all will be well!
Loved the previous question at:
500 Internal Server Error - ActionView::Template::Error in Rails Production
I get the same error when browsing the git tree via the web (internal 500), but the answer there said I should run
bundle exec rake assets:precompile
and referred me to
http://guides.rubyonrails.org/asset_pipeline.html#in-production
I am running GitLab 7.6.1 0286222 on Ubuntu 14.04 LTS, fully up to date. Pushing and pulling from local git machines works fine, and I can look around via the web service as well. I ran the revised assets:precompile as suggested below, but the problem continues for me.
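For reference, on a GitLab source installation the precompile is normally run as the git user with the production environment set; a sketch, assuming the standard /home/git/gitlab install path:

cd /home/git/gitlab
sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production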
As for my specific error: in the production log I get:
git@git01:~/gitlab/log$ tail -n 20 production.log
Started GET "/chef/cheftest/tree/master/cookbooks" for 127.0.0.1 at 2014-12-24 16:03:25 -0500
Processing by Projects::TreeController#show as HTML
Parameters: {"project_id"=>"chef/cheftest", "id"=>"master/cookbooks"}
Completed 500 Internal Server Error in 490ms
ActionView::Template::Error (undefined method `[]' for nil:NilClass):
1: - tree, commit = submodule_links(submodule_item)
2: %tr{ class: "tree-item" }
3: %td.tree-item-file-name
4: %i.fa.fa-archive
app/models/repository.rb:162:in `method_missing'
app/models/repository.rb:228:in `submodule_url_for'
app/helpers/submodule_helper.rb:6:in `submodule_links'
app/views/projects/tree/_submodule_item.html.haml:1:in `_app_views_projects_tree__submodule_item_html_haml___742655240099390426_69818877669240'
app/helpers/tree_helper.rb:19:in `render_tree'
app/views/projects/tree/_tree.html.haml:42:in `_app_views_projects_tree__tree_html_haml__47884322835133800_69818822684460'
app/views/projects/tree/show.html.haml:9:in `_app_views_projects_tree_show_html_haml__1575471590709486656_69818822138660'
app/controllers/projects/tree_controller.rb:13:in `show'
I would be happy to run any commands and edit any configuration files as needed, but please let me know where the files are and how to run the commands. Thanks for your help with this.
I am using a free Cloud Foundry account. Today I tried pushing my Play 2.2 application, but it refuses to start; the message is Unable to detect a supported application type (RuntimeError).
Deploying the app to Cloud Foundry was done as described in the official documentation.
Has anyone gotten this working yet?
Here is the full error message:
Preparing to start ***... OK
-----> Downloaded app package (38M)
/var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:94:in `build_pack': Unable to detect a supported application type (RuntimeError)
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:72:in `block in compile_with_timeout'
from /usr/lib/ruby/1.9.1/timeout.rb:68:in `timeout'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:71:in `compile_with_timeout'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:53:in `block in stage_application'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:49:in `chdir'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:49:in `stage_application'
from /var/vcap/packages/dea_next/buildpacks/bin/run:10:in `<main>'
Checking status of app '***'...Application failed to stage
EDIT: I posted the issue on the official mailing list. No answer yet. But here are the steps to reproduce the issue:
create a new play 2.2 app ( play new version22 )
cd into app directory ( cd version22 )
build the project ( play dist )
push the application to cloud foundry ( cf push --path=target/universal/version22-1.0-SNAPSHOT.zip ) -- just chose the defaults
bang
I guess this is caused by the new feature in Play 2.2 ("New stage and dist tasks", see What's new in Play 2.2?) that changed the packaging of the app. This could be what prevents Cloud Foundry from detecting the application type.
Which Cloud Foundry version are you targeting, v1 or v2?
The error you are encountering occurs because cf does not have a buildpack for the Play framework.
If you are targeting Cloud Foundry v2, try pushing the application this way:
cf push --buildpack https://github.com/cloudfoundry/java-buildpack
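With newer cf CLI releases the equivalent would be something like the following (the app name and the -b shorthand are assumptions; check your CLI version's help):

cf push <APP_NAME> -b https://github.com/cloudfoundry/java-buildpack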
After some trial and error, I got it working using the following manifest.yml to deploy on Cloud Foundry v2:
---
env:
JAVA_HOME: .java
applications:
- name: <APP_NAME>
memory: 512M
instances: 1
host: <AP_HOST_NAME>
domain: cfapps.io
path: <PATH_TO_ZIP_FILE>
command: ./<DIR_PACKAGE_NAME>/bin/<APP_NAME>
buildpack: https://github.com/cloudfoundry/java-buildpack
You have to fill in the info between <> for your app, and configure other information as well, but the core of the solution is to provide the JAVA_HOME env variable and the correct path for the start command.
Perhaps we should consider an SBT task to create this file as a permanent fix, or maybe update the java-buildpack; I'm not sure which approach is best.
Edit: You will also need to place a script called start in <DIR_PACKAGE_NAME>/start (a sketch follows), or else Cloud Foundry will try to compile the app and fail miserably. I suppose this needs to be fixed in the java-buildpack as well.
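A minimal sketch of such a start script, assuming the zip unpacks so that the launcher generated by play dist sits under bin/:

#!/bin/sh
# Hypothetical wrapper: exec the launcher that `play dist` generated so the
# start command configured in manifest.yml has something to call.
exec ./bin/<APP_NAME> "$@"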
This has been confirmed as a bug. Should be fixed soon.