Static buildpack deploy now failing due to unsupported stack - cloud-foundry

I'm trying to deploy an update to a simple HTML application using static files. Using the static buildpack, I've previously deployed the application with no issues.
Pushing an application update, the command fails with the following message:
----> Downloaded app package (4.0K)
Cloning into '/tmp/buildpacks/staticfile-buildpack'...
Submodule 'compile-extensions' (https://github.com/cloudfoundry-incubator/compile-extensions.git) registered for path 'compile-extensions'
FAILED
Server error, status code: 400, error code: 170004, message: App staging failed in the buildpack compile phase
Looking at the application's logs, staging fails because of an incompatibility with the stack.
ERR Cloning into '/tmp/buildpacks/staticfile-buildpack'...
OUT Submodule 'compile-extensions' (https://github.com/cloudfoundry-incubator/compile-extensions.git) registered for path 'compile-extensions'
ERR Cloning into 'compile-extensions'...
OUT Submodule path 'compile-extensions': checked out '1f260464c156bddfb654adb14298344797d030a1'
ERR It looks like you're deploying on a stack that's not supported by this buildpack.
ERR That could be because you're using a recent buildpack release on a deprecated stack.
ERR If you're using the buildpack installed by your CF admin, please let your admin know you saw this error message.
ERR If you at one point specified a buildpack that's at git URL, please make sure you're pointed at a version that supports this stack.
OUT Staging failed: Buildpack compilation step failed
ERR encountered error: App staging failed in the buildpack compile phase
How can I resolve this?

Cloud Foundry recently added support for a new stack, cflinuxfs2, based on Ubuntu 14.04. IBM Bluemix still supports the old lucid64 stack, which appears to be chosen by default if a stack isn't specified on the command line.
Looking at the staticfile buildpack's manifest, the old stack isn't supported:
cf_stacks:
  - cflinuxfs2
You can explicitly set the application stack when deploying using the '-s' command-line parameter:
cf push -b https://github.com/cloudfoundry/staticfile-buildpack.git -s cflinuxfs2
Using the 'cflinuxfs2' stack will fix the issue.
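Alternatively, if you deploy with a manifest.yml, the stack can be pinned there so the flag isn't needed on every push. A minimal sketch, with the application name as a placeholder:
---
applications:
- name: <APP_NAME>
  stack: cflinuxfs2
  buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git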

For anyone else having this issue on Bluemix: I was actually unable to use the cflinuxfs stack on the external Bluemix, but pushing with cflinuxfs2 seems to work:
cf push -b https://github.com/cloudfoundry/staticfile-buildpack.git -s cflinuxfs2
Edit: Running this command against the API endpoint I was using gave the following output, which is why I had to use cflinuxfs2:
> cf stacks
name         description
lucid64      Ubuntu 10.04
seDEA        private
cflinuxfs2   Ubuntu 14.04.2 trusty
Also: https://developer.ibm.com/answers/questions/198303/cloudfoundry-static-buildpack-not-compatible.html

Related

How to fix a build error in the "Install pods" step when using EAS Build (Expo)?

I'm trying to build an iOS app with Expo's EAS Build service, but the build fails during the "Install pods" step shown in the EAS build details. This is the error:
Unable to find a specification for `UMTaskManagerInterface` depended upon by `EXLocation`
I tried installing pods with npm i pod-install but still get the error. Is this because I'm building on Windows, or what should I do to fix it? I also looked for the error on GitHub; the suggestion was to add the pod path in ios/Podfile, but I can't find that file in my Expo project. Where is ios/Podfile located in an Expo project?
This is the full error:
Installing pods
Using Expo modules
Auto-linking React Native modules for target `MMS`: RNCAsyncStorage, RNCCheckbox, RNDateTimePicker, RNGestureHandler, RNPermissions, RNReanimated, RNScreens, react-native-safe-area-context, and react-native-viewpager
Analyzing dependencies
Fetching podspec for `DoubleConversion` from `../node_modules/react-native/third-party-podspecs/DoubleConversion.podspec`
Fetching podspec for `RCT-Folly` from `../node_modules/react-native/third-party-podspecs/RCT-Folly.podspec`
Fetching podspec for `glog` from `../node_modules/react-native/third-party-podspecs/glog.podspec`
Adding spec repo `trunk` with CDN `https://cdn.cocoapods.org/`
CocoaPods 1.11.2 is available.
To update use: `sudo gem install cocoapods`
For more information, see https://blog.cocoapods.org and the CHANGELOG for this version at https://github.com/CocoaPods/CocoaPods/releases/tag/1.11.2
[!] Unable to find a specification for `UMTaskManagerInterface` depended upon by `EXLocation`
You have either:
* out-of-date source repos which you can update with `pod repo update` or with `pod install --repo-update`.
* mistyped the name or version.
* not added the source repo that hosts the Podspec to your Podfile.
[stderr] [!] `<PBXResourcesBuildPhase UUID=`13B07F8E1A680F5B00A75B9A`>` attempted to initialize an object with an unknown UUID. `2EE81B3C866A4A13B6460929` for attribute: `files`. This can be the result of a merge and the unknown UUID is being discarded.
pod exited with non-zero code: 1
Edit: I tried using expo build:ios and the archive builds perfectly. Why do I get that error with eas build -p ios?

How to invoke AWS SAM locally using remote docker (as opposed to docker desktop)?

I have AWS SAM installed on a Windows machine. I have followed the instructions here https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html to create a test Hello World application.
I have docker server running on a separate (Linux) VM. How do I invoke AWS SAM locally?
I have tried the following:
sam local start-api --container-host-interface 0.0.0.0 --container-host 192.168.28.168
where 192.168.28.168 is the Linux VM where docker server is running. (I.e. different to the Windows machine I’m developing on).
However, I get “Error: Cannot find module”:
PS C:\Develop\AWS\sam-app> sam local start-api --container-host-interface 0.0.0.0 --container-host 192.168.28.168
Mounting HelloWorldFunction at http://127.0.0.1:3000/hello [GET]
You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2021-09-24 07:50:10 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
Invoking app.lambdaHandler (nodejs14.x)
Skip pulling image and use local one: amazon/aws-sam-cli-emulation-image-nodejs14.x:rapid-1.27.2.
Mounting C:\Develop\AWS\sam-app\.aws-sam\build\HelloWorldFunction as /var/task:ro,delegated inside runtime container
START RequestId: bd6b8177-56bb-4464-8ead-8c46809e6c6c Version: $LATEST
2021-09-24T06:50:35.674Z undefined ERROR Uncaught Exception {"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'app'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: Cannot find module 'app'","Require stack:","- /var/runtime/UserFunction.js","- /var/runtime/index.js"," at _loadUserApp (/var/runtime/UserFunction.js:100:13)"," at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)"," at Object.<anonymous> (/var/runtime/index.js:43:30)"," at Module._compile (internal/modules/cjs/loader.js:1085:14)"," at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)"," at Module.load (internal/modules/cjs/loader.js:950:32)"," at Function.Module._load (internal/modules/cjs/loader.js:790:14)"," at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:76:12)"," at internal/main/run_main_module.js:17:47"]}
time="2021-09-24T06:50:35.691" level=panic msg="ReplyStream not available"
SAM is communicating with the container ok, as evidenced by the START RequestId:… line. However, it’s failing to find the app.js to run.
I suspect it’s something to do with volume mapping.
I’ve tried setting --docker-volume-basedir to various values, but it seems to make no difference.
The “Remote Docker” section on this page https://github.com/thoeni/aws-sam-local#remote-docker suggests that “the project directory must be pre-mounted on the remote host where the Docker is running”. But how do I do that, when I’m not using docker desktop?
There are some similar sounding suggestions here https://github.com/aws/aws-sam-cli/issues/2837#issuecomment-879655277 which seem to involve modifying the dockerfile to mount a volume. However, I don’t have a dockerfile – SAM is just pulling the image automatically when invoked.
Any ideas? Is it even possible to invoke AWS SAM locally using a remote Docker server as opposed to Docker Desktop?
The section “Step 3: Install Docker (optional)” of the SAM install guide https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-windows.html describes setting up shared drives: “The AWS SAM CLI requires that the project directory, or any parent directory, is listed in a shared drive.” However, it’s evident that it’s expecting Docker Desktop, not docker running on a remote server.
Maybe it’s just not possible to invoke AWS SAM locally without Docker Desktop?
Ok, I've now realised where I went wrong.
At this point in the SAM log:
Mounting C:\Develop\AWS\sam-app\.aws-sam\build\HelloWorldFunction as /var/task:ro,delegated inside runtime container
AWS SAM is attempting to bind mount the C:\Develop\AWS\... directory on the Docker host to /var/task in the Docker container.
My mistake was thinking that it was mounting the actual directory on my local development machine.
I logged into the Docker host machine, and could see the directory structure had been created: /c/Develop/AWS/.... I transferred app.js from my local development machine to the Docker host's directory, and bingo - it now works. :-)
So, now the description in the AWS SAM developer guide for --docker-volume-basedir makes more sense:
The location of the base directory where the AWS SAM file exists. If Docker is running on a remote machine, you must mount the path where the AWS SAM file exists on the Docker machine, and modify this value to match the remote machine.
So I guess I need to create an SMB mapping from the application folder on my Windows development machine to a folder on the Linux Docker host, and ensure that the Docker host (Linux) folder gets used for running the application by setting --docker-volume-basedir accordingly.
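For reference, a minimal sketch of what that setup might look like, assuming the project folder is shared over SMB and mounted on the Linux Docker host at /c/Develop/AWS/sam-app, the remote Docker daemon listens on tcp://192.168.28.168:2375, and SAM runs from PowerShell (the share name, port, and paths are assumptions, not values taken from the question):
# On the Linux Docker host: mount the Windows share so the bind-mount source exists (hypothetical share and credentials)
sudo mkdir -p /c/Develop/AWS/sam-app
sudo mount -t cifs //192.168.28.1/sam-app /c/Develop/AWS/sam-app -o username=dev
# On the Windows machine: point the Docker client at the remote daemon and tell SAM which base directory to use on that host
$env:DOCKER_HOST = "tcp://192.168.28.168:2375"
sam local start-api --container-host 192.168.28.168 --container-host-interface 0.0.0.0 --docker-volume-basedir /c/Develop/AWS/sam-app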

Logstash Google Pubsub Input Plugin fails to load file and pull messages

I'm getting this error when trying to run Logstash pipeline with a configuration that is using google_pubsub on a docker container running in my production env:
2021-09-16 19:13:25 FATAL runner:135 - The given configuration is invalid. Reason: Unable to configure plugins: (PluginLoadingError) Couldn't find any input plugin named 'google_pubsub'. Are you sure this is correct? Trying to load the google_pubsub input plugin resulted in this error: Problems loading the requested plugin named google_pubsub of type input. Error: RuntimeError
you might need to reinstall the gem which depends on the missing jar or in case there is Jars.lock then resolve the jars with `lock_jars` command
no such file to load -- com/google/cloud/google-cloud-pubsub/1.37.1/google-cloud-pubsub-1.37.1 (LoadError)
2021-09-16 19:13:25 ERROR Logstash:96 - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
This seems to happen randomly when reinstalling the plugin. I thought it was a proxy issue, but I have the Google domain enabled in the whitelist. It might be the wrong one, or I might be missing something, but that still doesn't explain the random failures.
Also, when I run the pipeline in my machine I get GCP events, but when I do it on a VM - no Pubsub messages are being pulled. Could it be a firewall rule blocking them?
The error message suggests there is a problem loading the google_pubsub input plugin. This error generally occurs when the Pub/Sub input plugin is not installed properly, so make sure you are installing the Logstash plugin for Pub/Sub correctly.
For example, to install the Logstash plugin for Pub/Sub on a VM:
sudo -u root sudo -u logstash bin/logstash-plugin install logstash-input-google_pubsub
For a detailed demo refer to this community tutorial.
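To rule out a partial install before starting the pipeline, you can also check that the plugin is actually registered. A sketch, assuming Logstash was installed from packages under /usr/share/logstash:
cd /usr/share/logstash
sudo -u logstash bin/logstash-plugin list --verbose | grep google_pubsub
If the plugin does not appear in the list, the install step itself failed and should be repeated.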

AWS: ERROR: Pre-processing of application version xxx has failed and Some application versions failed to process. Unable to continue deployment

Hi, I am trying to deploy a Node application from Cloud9 to Elastic Beanstalk, but I keep getting the error below.
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
ERROR: Pre-processing of application version app-491a-200623_151654 has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
I have attached an image of the IAM roles that I have. Any solutions?
Go to the Elastic Beanstalk console, and delete both the application and the environment. Then in your terminal run:
eb init            # follow the instructions
eb create --single # follow the instructions
That should fix the error, which is caused by application versions in a failed state. If you want to check for those, run:
aws elasticbeanstalk describe-application-versions
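If there are many versions, a JMESPath filter narrows the output to just the failed ones. A sketch, assuming an application named my-app (the name is a placeholder, and the 'Failed' status value follows the Elastic Beanstalk API):
aws elasticbeanstalk describe-application-versions --application-name my-app --query "ApplicationVersions[?Status=='Failed'].[VersionLabel,Status]" --output table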
I was searching for this answer as a result of watching a YouTube tutorial for how to pass the AWS Certified Developer Associate exam. If anyone else gets this error as a result of that tutorial, delete the 002_node_command.config file created in the tutorial and commit that change, as that is causing the error to occur.
A failure in the pre-processing phase may be caused by an invalid manifest, configuration, or .ebextensions file.
If you deploy an (invalid) application version using eb deploy with the preprocess option enabled, the details of the error will not be revealed.
You can remove the --process flag and enable the verbose option to improve error output.
In my case I deploy using this command:
eb deploy -l "XXX" -p
It can return a failure when I mess around with .ebextensions:
ERROR: Pre-processing of application version xxx has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
With that result I can't figure out what is wrong, but deploying without -p (or --process) and adding the -v (verbose) flag:
eb deploy -l "$deployname" -v
It returns something more useful:
Uploading: [##################################################] 100% Done...
INFO: Creating AppVersion xxx
ERROR: InvalidParameterValueError - The configuration file .ebextensions/16-my_custom_config_file.config in application version xxx contains invalid YAML or JSON.
YAML exception: Invalid Yaml: while scanning a simple key
in 'reader', line 6, column 1:
(... details of the error ...)
, JSON exception: Invalid JSON: Unexpected character (#) at position 0.. Update the configuration file.
Now I can fix the problem.

VMC cannot detect application type when publishing Play Framework 2.2 app to CloudFoundry

I am using a free Cloud Foundry account. Today I tried pushing my Play 2.2 application, but it fails to start with the message Unable to detect a supported application type (RuntimeError).
Deploying the app to cloud foundry is done as described in the official documentation.
Has anyone gotten this working yet?
Here is the full error message:
Preparing to start ***... OK
-----> Downloaded app package (38M)
/var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:94:in `build_pack': Unable to detect a supported application type (RuntimeError)
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:72:in `block in compile_with_timeout'
from /usr/lib/ruby/1.9.1/timeout.rb:68:in `timeout'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:71:in `compile_with_timeout'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:53:in `block in stage_application'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:49:in `chdir'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:49:in `stage_application'
from /var/vcap/packages/dea_next/buildpacks/bin/run:10:in `<main>'
Checking status of app '***'...Application failed to stage
EDIT: I posted the issue on the official mailing list. No answer yet. But here are the steps to reproduce the issue:
create a new play 2.2 app ( play new version22 )
cd into app directory ( cd version22 )
build the project ( play dist )
push the application to cloud foundry ( cf push --path=target/universal/version22-1.0-SNAPSHOT.zip ) -- just chose the defaults
bang
I guess this is caused by the new "stage and dist tasks" feature (see "What's new in Play 2.2?"), which changed the packaging of the app. This could cause Cloud Foundry problems detecting the application type.
Which Cloud Foundry version are you targeting, v1 or v2?
The error you are encountering occurs because cf does not have a buildpack for the Play framework.
If you are targeting Cloud Foundry v2, try pushing the application this way:
cf push --buildpack https://github.com/cloudfoundry/java-buildpack
After some trial and error, I got it working using the following manifest.yml to deploy on cloud foundry v2:
---
env:
  JAVA_HOME: .java
applications:
- name: <APP_NAME>
  memory: 512M
  instances: 1
  host: <AP_HOST_NAME>
  domain: cfapps.io
  path: <PATH_TO_ZIP_FILE>
  command: ./<DIR_PACKAGE_NAME>/bin/<APP_NAME>
  buildpack: https://github.com/cloudfoundry/java-buildpack
You have to fill in the info between <> for your app, and configure other settings as well, but the core of the solution is to provide the JAVA_HOME environment variable and the correct path for the start command.
Perhaps we should consider a SBT task to create this file as a permanent fix, or maybe update the java-buildpack... I'm not sure which one is the best approach.
Edit: You also will need to place a script called start in <DIR_PACKAGE_NAME>/start, or else cloud foundry will try to compile the app and fail miserably - I suppose this needs to be fixed in java-buildpack as well.
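A minimal sketch of what such a start script could look like (the launcher path and the -Dhttp.port flag are assumptions based on how a Play dist package is usually started, not something confirmed by the buildpack docs):
#!/bin/sh
# Hypothetical wrapper so the platform has an explicit start script;
# adjust the path to the launcher that `play dist` generated under bin/.
exec ./bin/<APP_NAME> -Dhttp.port=$PORT
Remember to make the script executable (chmod +x) before packaging so it survives in the zip.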
This has been confirmed as a bug. Should be fixed soon.