I am deploying a Django project to GAE. When I deploy locally everything runs smoothly, but when I deploy from Jenkins the build hangs on the following line:
12:42 PM Getting current resource limits.
Full Deploy Log
12:42 PM Application: <gaeproject>; version: dev (was: 1)
12:42 PM Host: appengine.google.com
12:42 PM Starting update of app: <gaeproject>, version: dev
12:42 PM Getting current resource limits.
Deploy Command
python $GAE_PATH/appcfg.py -A $GAE_INSTANCE -V $GAE_VERSION update .
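To stop Jenkins from hanging forever while I debug, I have been wrapping the command in a hard timeout with extra logging. A sketch (--verbose is an appcfg.py flag per the classic SDK help, but verify with appcfg.py help update for your SDK version; a common cause of hangs in non-interactive jobs is appcfg.py waiting on an authentication prompt that Jenkins can never answer):
# Fail the build after 10 minutes instead of hanging indefinitely,
# and print info-level logs from appcfg.py.
timeout 600 python $GAE_PATH/appcfg.py --verbose -A $GAE_INSTANCE -V $GAE_VERSION update .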
Has anyone else encountered this issue before? What might be causing this hang?
Related
I already followed this answer, but it didn't help. I also re-installed the gcloud CLI, but now I cannot get it set up any more because of the following error.
Here is my output for ./google-cloud-sdk/bin/gcloud init
ERROR: Reachability Check failed.
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with httplib2 (SSLCertVerificationError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with httplib2 (SSLCertVerificationError)
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with requests (SSLError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with requests (SSLError)
Network connection problems may be due to proxy or firewall settings.
Also, I am not behind any corporate proxy.
It was working perfectly a few days ago; I did not change any settings or install any new services.
Output for ./google-cloud-sdk/bin/gcloud info.
Google Cloud SDK [354.0.0]
Python Version: [3.7.9 (v3.7.9:13c94747c7, Aug 15 2020, 01:31:08) [Clang 6.0 (clang-600.0.57)]]
Python Location: [/Users/myname/.config/gcloud/virtenv/bin/python3]
Site Packages: [Enabled]
Installation Root: [/Users/myname/Downloads/google-cloud-sdk]
Installed Components:
gsutil: [4.67]
core: [2021.08.20]
bq: [2.0.71]
System PATH: [/Users/myname/.config/gcloud/virtenv/bin:/Users/myname/Downloads/apache-maven-3.8.4/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/usr/local/munki:/usr/local/opt/go/libexec/bin:/Users/myname/go/bin]
Python PATH: [/Users/myname/Downloads/./google-cloud-sdk/lib/third_party:/Users/myname/Downloads/google-cloud-sdk/lib:/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload:/Users/myname/.config/gcloud/virtenv/lib/python3.7/site-packages]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/local/bin/kubectl]
Installation Properties: [/Users/myname/Downloads/google-cloud-sdk/properties]
User Config Directory: [/Users/myname/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/myname/.config/gcloud/configurations/config_default]
Account: [None]
Project: [None]
Current Properties:
[core]
disable_usage_reporting: [True]
Logs Directory: [/Users/myname/.config/gcloud/logs]
Last Log File: [/Users/myname/.config/gcloud/logs/2022.08.10/15.35.06.807614.log]
git: [git version 2.32.0 (Apple Git-132)]
ssh: [OpenSSH_8.1p1, LibreSSL 2.7.3]
Update on this: just disable SSL validation and everything will work.
gcloud config set auth/disable_ssl_validation True
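Note that disabling SSL validation is a workaround rather than a fix. If the verification errors come from an intercepting proxy or a custom CA, a safer option, assuming you have the CA bundle on disk, is to point gcloud at it and re-enable validation (the bundle path below is a placeholder):
# Point gcloud at a custom CA bundle instead of turning verification off.
gcloud config set core/custom_ca_certs_file /path/to/ca-bundle.pem
# Undo the workaround once the certificate issue is resolved.
gcloud config unset auth/disable_ssl_validation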
I am working with Django Channels and running into problems deploying to the Google App Engine flexible environment. First I got the error 'deployment has failed to become healthy in the allotted time' and resolved it by adding a readiness_check to app.yaml; now I am getting the error below:
(gcloud.app.deploy) Operation [apps/socketapp-263709/operations/65c25731-1e5a-4aa1-83e1-34955ec48c98] timed out. This operation may still be underway.
App.yaml
runtime: python
env: flex
runtime_config:
  python_version: 3
instance_class: F4_HIGHMEM
handlers:
# This configures Google App Engine to serve the files in the app's
# static directory.
- url: /static
  static_dir: static/
- url: /.*
  script: auto
# [END django_app]
readiness_check:
  check_interval_sec: 120
  timeout_sec: 40
  failure_threshold: 5
  success_threshold: 5
  app_start_timeout_sec: 1500
How can I fix this issue? Any suggestions?
This error can be caused by several issues:
1) Your app.yaml file isn't configured correctly. In the App Engine flexible environment, resources are not requested through the instance_class option; you have to use the resources option instead, like the following:
resources:
  cpu: 2
  memory_gb: 2.3
  disk_size_gb: 10
  volumes:
  - name: ramdisk1
    volume_type: tmpfs
    size_gb: 0.5
2) You're missing an entrypoint for your app. To deploy Django Channels, the docs suggest defining an entrypoint for the Daphne server. Add the following to your app.yaml:
entrypoint: daphne -b 0.0.0.0 -p 8080 mysite.asgi:application
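This assumes your project exposes an ASGI application object at mysite/asgi.py. For Channels 2.x that module is typically just a few lines; a minimal sketch, with mysite as a placeholder for your project package:
# mysite/asgi.py -- the module Daphne loads via "mysite.asgi:application".
import os
import django
from channels.routing import get_default_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.setup()
application = get_default_application()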
3) If you still get the same error after the above, it's possible that the in-use IP addresses quota in the region of your App Engine flexible application has reached its limit.
To check this, go to the "Activity" tab of your project home page. There you can see warnings about quota limits and VMs failing to be created.
By default, App Engine keeps the previous versions of your app, and any that are still running may hold IP addresses. You can delete the previous versions and/or request an increase of your IP address quota limit.
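A sketch of doing that with gcloud (the version IDs below are placeholders; take the real ones from the list output):
# List all deployed versions and their serving status.
gcloud app versions list
# Delete versions you no longer need.
gcloud app versions delete v1 v2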
Also, updating the gcloud tools and SDK may resolve the issue.
To check your in-use addresses, open the Quotas page in the Cloud Console; you can request an increase there with the 'Edit Quotas' button.
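The same check also works from the command line:
# List the external IP addresses currently reserved or in use in the project.
gcloud compute addresses list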
I'm trying to deploy a Django application on Elastic Beanstalk. It had been working fine, then suddenly stopped, and I cannot figure out why.
When I do eb deploy I get
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
INFO: New application version was deployed to running EC2 instances.
INFO: Environment update completed successfully.
Alert: An update to the EB CLI is available. Run "pip install --upgrade awsebcli" to get the latest version.
INFO: Attempting to open port 22.
INFO: SSH port 22 open.
INFO: Running ssh -i /home/ubuntu/.ssh/web-cdi_011017.pem ec2-user@54.188.214.227 if ! grep -q 'WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/wsgi.conf ; then echo -e 'WSGIApplicationGroup %{GLOBAL}' | sudo tee -a /etc/httpd/conf.d/wsgi.conf; fi;
INFO: Attempting to open port 22.
INFO: SSH port 22 open.
INFO: Running ssh -i /home/ubuntu/.ssh/web-cdi_011017.pem ec2-user@54.188.214.227 sudo /etc/init.d/httpd reload
Reloading httpd: [ OK ]
When I then run eb health, I get
Incorrect application version found on all instances. Expected version
"app-c56a-190604_135423" (deployment 300).
If I eb ssh in and look in /opt/python/current, there is nothing there, so nothing is being copied across.
I think something may be wrong with .elasticbeanstalk/config.yml. Somehow the directory was deleted and set up again. This is the config.yml:
branch-defaults:
  master:
    environment: app-prod
  scoring-dev:
    environment: app-dev
environment-defaults:
  app-prod:
    branch: null
    repository: null
global:
  application_name: my-app
  default_ec2_keyname: am-app_011017
  default_platform: arn:aws:elasticbeanstalk:us-west-2::platform/Python 2.7 running
    on 64bit Amazon Linux/2.3.1
  default_region: us-west-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application
Any ideas about how to troubleshoot this?
I upgraded to the latest AWS stack for Python 2.7 and that sorted it.
I faced the same problem, and the cause was the command timeout.
The default maximum deployment time (Command timeout) is 600 seconds (10 minutes):
Your Environment → Configuration → Deployment preferences → Command timeout
Increase the command timeout, for example to 1800 seconds,
or upgrade the instance type so deployments run faster.
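If you prefer to keep the setting in the repository instead of the console, the same timeout can be set with an .ebextensions config file. A sketch (the file name is arbitrary; aws:elasticbeanstalk:command is the standard namespace for this option, but verify against your platform docs):
# .ebextensions/increase-timeout.config
option_settings:
  aws:elasticbeanstalk:command:
    Timeout: 1800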
tldr:
- I'm writing logs from a Python application that's running outside of Google Cloud
- I'm using gcplogs to import the logs into Stackdriver
- The imported logs must be in the wrong format, because they're not displayed as they should be in Stackdriver logging
I'm having a hard time figuring out how to configure the gcplogs Docker log driver so that logs appear as they should in Stackdriver logging. Here's what the logs currently look like:
It looks like Stackdriver isn't parsing these logs correctly. However, if the Docker image is run in GKE, the logs look correct:
Here Stackdriver has recognized that these logs were debug messages.
Since these two applications use the same logger, I think the message logged from the application is in the right format, and it must be the gcplogs configuration that's wrong or missing something. (See this repository for what the Python code looks like.)
Here's the content of /etc/docker/daemon.json on the machine that's running the Docker image:
{
  "log-driver": "gcplogs",
  "log-opts": {
    "gcp-project": "removed",
    "env": "host"
  }
}
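For testing, the same options can also be passed per container, which avoids restarting the Docker daemon after every change to /etc/docker/daemon.json (the image name below is a placeholder):
# Run one container with gcplogs, overriding the daemon defaults.
docker run --log-driver=gcplogs \
  --log-opt gcp-project=removed \
  --log-opt env=host \
  my-image:latest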
The output of docker info | grep 'Logging Driver' is Logging Driver: gcplogs.
The output of docker version is:
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.2
Git commit: 0520e24
Built: Wed Mar 21 23:05:52 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:14:54 2018
OS/Arch: linux/amd64
Experimental: false
Any tips on how to configure this so that the logs end up looking as they should in Stackdriver would be much appreciated.
I have tried to use Docker Toolbox to set up Hyperledger Fabric V1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy chaincode:
$node deploy.js
I got an error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this a problem with connecting to the CA, or could it have another cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open the Docker Quickstart Terminal and run the following commands:
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers were up and running.
2. Download the examples and install the modules:
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
// This link didn't work, so I downloaded the required files from the fabric-sdk-node GitHub repository
$npm install --global windows-build-tools
$npm install
3. Try to deploy the chaincode:
$node deploy.js
There were several problems, not least that the documentation was outdated and written for a preview release of Hyperledger Fabric. The docs are actually in the process of being removed, as we need to update our examples / samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
So one of the issues with Docker Toolbox or Docker for Windows is that you cannot use localhost / 127.0.0.1 as the address when communicating from apps on the host (even in the Quickstart Terminal) to the endpoints of the Docker containers. When the Quickstart Terminal first launches Docker, it outputs the IP address of the endpoint you should use when communicating with exposed ports.
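You can print that IP at any time from the same terminal (assuming the default machine name that Docker Toolbox creates):
# Print the IP of the Docker Toolbox VM; use this instead of localhost.
docker-machine ip default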
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost domain.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened enrollAdmin.js in the fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker Toolbox IP, leaving the port :7054 as is (see the example at the end of this answer)
Saved
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!
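For reference, here is what the changed line in enrollAdmin.js looks like with 192.168.99.100, the usual Docker Toolbox default; substitute whatever IP your Quickstart Terminal printed:
// localhost replaced with the Docker Toolbox VM address; the port stays 7054.
fabric_ca_client = new Fabric_CA_Client('http://192.168.99.100:7054', tlsOptions, 'ca.example.com', crypto_suite);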