Note: this works perfectly when testing locally with the Functions Framework.
I just deployed a function:
gcloud functions deploy quantumjs-api --runtime nodejs10 --trigger-http --project qunatumvue --region europe-west2
Deploying function (may take a while - up to 2 minutes)...done.
availableMemoryMb: 256
entryPoint: quantumjs-api
environmentVariables:
  location: production
httpsTrigger:
  url: https://europe-west2-qunatumvue.cloudfunctions.net/quantumjs-api
labels:
  deployment-tool: cli-gcloud
name: projects/qunatumvue/locations/europe-west2/functions/quantumjs-api
runtime: nodejs10
Edit: thanks Doug Stevenson for the ping pointer.
However, when posting data to it, I get no response back, just this error:
"Error: Network Error
at createError (webpack-internal:///./node_modules/axios/lib/core/createError.js:16:15)
at XMLHttpRequest.handleError (webpack-internal:///./node_modules/axios/lib/adapters/xhr.js:87:14)"
You can't ping a URL. You ping a hostname. The hostname in the URL you've given is "europe-west2-qunatumvue.cloudfunctions.net". When I ping that, it's fine:
user@host 18:26 $ ping europe-west2-qunatumvue.cloudfunctions.net
PING www3.l.google.com (173.194.202.138) 56(84) bytes of data.
64 bytes from pf-in-f138.1e100.net (173.194.202.138): icmp_seq=1 ttl=42 time=29.3 ms
64 bytes from pf-in-f138.1e100.net (173.194.202.138): icmp_seq=2 ttl=42 time=29.3 ms
64 bytes from pf-in-f138.1e100.net (173.194.202.138): icmp_seq=3 ttl=42 time=29.3 ms
If you want to check if the URL works, you should instead access it with curl or some HTTP library.
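For example, to exercise the function the same way axios would (the JSON body here is just a placeholder):

curl -i -X POST -H "Content-Type: application/json" -d '{"test": 1}' https://europe-west2-qunatumvue.cloudfunctions.net/quantumjs-api

The -i flag prints the response headers, which is useful for spotting missing CORS headers.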
I figured out my issue: I had to redirect all traffic to https, as that's what my domain starts with in the CORS policy file.
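A quick way to confirm what the function's CORS policy actually allows is to replay the browser's preflight request by hand (https://example.com below is a stand-in for the real site origin):

curl -i -X OPTIONS https://europe-west2-qunatumvue.cloudfunctions.net/quantumjs-api -H "Origin: https://example.com" -H "Access-Control-Request-Method: POST"

If the Access-Control-Allow-Origin header in the response doesn't match the site's scheme and host exactly, the browser surfaces it as a generic Network Error, which matches the axios output above.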
I have an in-VPC CodeBuild project which is set up using an ELB as a proxy server (for limited internet access). The buildspec of that CodeBuild project references a parameter from the Parameter Store. However, when the build runs, it fails with
Decrypted Variables Error: RequestError: send request failed caused by: Post https://ssm.ap-northeast-1.amazonaws.com/: dial tcp 52.179.283.42:443: i/o timeout
The proxy server has access to all amazonaws.com endpoints, all HTTP_PROXY variables are set up properly, and in the buildspec I have also mentioned the proxy settings (upload logs/artifacts - true). I'm not sure how to fix this issue, or whether it is even allowed to access SSM parameters from an in-VPC CodeBuild.
Can you try adding the environment variables in your buildspec in lowercase as well? Go programs (which many AWS tools are written in) generally look for the lowercase variants:
env:
  variables:
    HTTP_PROXY: "http://<proxy_server_hostname>:9480"
    HTTPS_PROXY: "http://<proxy_server_hostname>:9480"
    NO_PROXY: "169.254.169.254,169.254.170.2"
    http_proxy: "http://<proxy_server_hostname>:9480"
    https_proxy: "http://<proxy_server_hostname>:9480"
    no_proxy: "169.254.169.254,169.254.170.2"
phases:
  build:
    commands:
      - curl -v https://ssm.ap-northeast-1.amazonaws.com
I've been using Cloud Run for a while and the entire user experience is simply amazing!
Currently I'm using Cloud Build to deploy the container image, push the image to GCR, then create a new Cloud Run revision.
Now I want to call a script to purge caches from the CDN after the latest revision is successfully deployed to Cloud Run; however, the $ gcloud run deploy command can't tell you whether traffic has started pointing to the latest revision.
Is there any command or event that I can subscribe to, to make sure no traffic is pointing to the old revision, so that I can safely purge all caches?
@Dustin's answer is correct; however, "status" messages are an indirect result of the Route configuration, as those things are updated separately (and you might see a few seconds of delay between them). The status message will still be able to tell you the Revision has been taken out of rotation, if you don't mind this.
To answer this specific question (emphasis mine) using API objects directly:
Is there any command or the event that I can subscribe to to make sure no traffic is pointing to the old revision?
You need to look at Route objects in the API. This is a Knative API (it's available on Cloud Run), but it doesn't have a gcloud command: https://cloud.google.com/run/docs/reference/rest/v1/namespaces.routes
For example, assume you did a 50%-50% traffic split on your Cloud Run service. When you do this, you'll find your Service object (which you can see in Cloud Console → Cloud Run → YAML tab) has the following spec.traffic field:
spec:
  traffic:
  - revisionName: hello-00002-mob
    percent: 50
  - revisionName: hello-00001-vat
    percent: 50
This is "desired configuration" but it actually might not reflect the status definitively. Changing this field will go and update Route object –which decides how the traffic is splitted.
To see the Route object under the covers (sadly I'll have to use curl here because no gcloud command for this:)
TOKEN="$(gcloud auth print-access-token)"
curl -vH "Authorization: Bearer $TOKEN" \
https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/GCP_PROJECT/routes/SERVICE_NAME
This command will show output like:
"spec": {
"traffic": [
{
"revisionName": "hello-00002-mob",
"percent": 50
},
{
"revisionName": "hello-00001-vat",
"percent": 50
}
]
},
(which you might notice is identical to the Service's spec.traffic, because it's copied from there). This can tell you definitively which revisions are currently serving traffic for that particular Service.
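For scripting, a small sketch that pulls just the traffic assignments out with jq (assuming jq is installed; per the Knative schema, status.traffic reports the actual assignments rather than the desired ones):

TOKEN="$(gcloud auth print-access-token)"
curl -sH "Authorization: Bearer $TOKEN" \
  https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/GCP_PROJECT/routes/SERVICE_NAME \
  | jq '.status.traffic'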
You can use gcloud run revisions list to get a list of all revisions:
$ gcloud run revisions list --service helloworld
   REVISION          ACTIVE  SERVICE     DEPLOYED                 DEPLOYED BY
✔  helloworld-00009  yes     helloworld  2019-08-17 02:09:01 UTC  email@email.com
✔  helloworld-00008          helloworld  2019-08-17 01:59:38 UTC  email@email.com
✔  helloworld-00007          helloworld  2019-08-13 22:58:18 UTC  email@email.com
✔  helloworld-00006          helloworld  2019-08-13 22:51:18 UTC  email@email.com
✔  helloworld-00005          helloworld  2019-08-13 22:46:14 UTC  email@email.com
✔  helloworld-00004          helloworld  2019-08-13 22:41:44 UTC  email@email.com
✔  helloworld-00003          helloworld  2019-08-13 22:39:16 UTC  email@email.com
✔  helloworld-00002          helloworld  2019-08-13 22:36:06 UTC  email@email.com
✔  helloworld-00001          helloworld  2019-08-13 22:30:03 UTC  email@email.com
You can also use gcloud run revisions describe to get details about a specific revision, which will contain a status field. For example, an active revision:
$ gcloud run revisions describe helloworld-00009
...
status:
  conditions:
  - lastTransitionTime: '2019-08-17T02:09:07.871Z'
    status: 'True'
    type: Ready
  - lastTransitionTime: '2019-08-17T02:09:14.027Z'
    status: 'True'
    type: Active
  - lastTransitionTime: '2019-08-17T02:09:07.871Z'
    status: 'True'
    type: ContainerHealthy
  - lastTransitionTime: '2019-08-17T02:09:05.483Z'
    status: 'True'
    type: ResourcesAvailable
And an inactive revision:
$ gcloud run revisions describe helloworld-00008
...
status:
  conditions:
  - lastTransitionTime: '2019-08-17T01:59:45.713Z'
    status: 'True'
    type: Ready
  - lastTransitionTime: '2019-08-17T02:39:46.975Z'
    message: Revision retired.
    reason: Retired
    status: 'False'
    type: Active
  - lastTransitionTime: '2019-08-17T01:59:45.713Z'
    status: 'True'
    type: ContainerHealthy
  - lastTransitionTime: '2019-08-17T01:59:43.142Z'
    status: 'True'
    type: ResourcesAvailable
You'll specifically want to check the type: Active condition.
This is all available via the Cloud Run REST API as well: https://cloud.google.com/run/docs/reference/rest/v1/namespaces.revisions
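Putting that together, a hedged polling sketch for the original use case: wait until the old revision's Active condition turns False, then purge. (REVISION is a placeholder; add --platform/--region flags as your setup requires.)

until gcloud run revisions describe "$REVISION" --format=yaml \
    | grep -B1 "type: Active" | grep -q "status: 'False'"; do
  sleep 5
done
echo "revision $REVISION is out of rotation; safe to purge CDN caches"

This relies on the status line directly preceding its type line in the YAML output, as in the examples above.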
By default, traffic is routed to the latest revision. You can see this in the logs.
Deploying container to Cloud Run service [SERVICE_NAME] in project [YOUR_PROJECT] region [YOUR_REGION]
✓ Deploying... Done.
✓ Creating Revision...
✓ Routing traffic...
Done.
Service [SERVICE_NAME] revision [SERVICE_NAME-00012-yic] has been deployed and is serving 100 percent of traffic at https://SERVICE_NAME-vqg64v3fcq-uc.a.run.app
If you want to be sure, you can explicitly call the update-traffic command:
gcloud run services update-traffic --platform=managed --region=YOUR_REGION --to-latest YOUR_SERVICE
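To verify afterwards, a sketch that reads the served split back from the Service status (same placeholder flags as above):

gcloud run services describe YOUR_SERVICE --platform=managed --region=YOUR_REGION --format="yaml(status.traffic)"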
I am currently successfully using Ansible to run tasks on hosts that are in a private subnet in AWS, which the below group_vars is setting up:
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q ec2-user@bastionhost.example.com"'
This is working fine.
For Windows instances not in a private subnet the following group_vars works:
---
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
Now, trying to get Ansible to deploy to a Windows server behind the bastion by just using the ProxyCommand won't work - which I understand.
I believe though that there is a new protocol/module I can use called psrp.
I imagine that my group_vars for my Windows hosts needs to change to something like this:
---
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: psrp
ansible_psrp_cert_validation: ignore
If I run with just the above changes against instances that are publicly available (and not trying to connect via a bastion), my task seems to work fine:
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/win_shell.ps1
<10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14
PSRP: EXEC (via pipeline wrapper)
I know there must be more changes needed before I can try this on a Windows server behind a bastion, but I ran it anyway to see what errors I would get, to give me clues on what to do next. Here is the result when running this against an instance behind a bastion server:
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/setup.ps1
<10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14
The full traceback is:
.
.
.
.
ConnectTimeout: HTTPSConnectionPool(host='10.100.11.14', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x110bbfbd0>, 'Connection to 10.100.11.14 timed out. (connect timeout=30)'))
It seems like Ansible is ignoring my group_vars for the ProxyCommand, and I'm not sure if that's expected.
I'm also not sure on what the next steps are to enable Ansible to deploy to Windows servers behind a bastion.
What config am I missing?
The docs say the ansible_ssh_common_args setting is appended to sftp, scp, and ssh commands. So it sounds normal to me that it is not taken into account when using the winrm or psrp ansible_connection.
As explained in the link provided by Pouyan in the comments, the ansible_psrp_proxy variable can be used to provide the proxy information.
ansible_connection: psrp
ansible_psrp_proxy: socks5h://localhost:1234
More info on the creation of the socks proxy can be found on: https://www.bloggingforlogging.com/2018/10/14/windows-host-through-ssh-bastion-on-ansible/
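For completeness, a minimal sketch of opening the SOCKS proxy that ansible_psrp_proxy points at, reusing the bastion from the question's ProxyCommand (-f backgrounds the tunnel, -N skips running a remote command):

ssh -D 1234 -fN ec2-user@bastionhost.example.com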
I'm experimenting with CF in my local bosh-lite setup.
The apps that I deploy into it work well. I am now trying to follow the steps here
https://github.com/cf-platform-eng/cf-community-workshop/blob/master/demos/service-broker-lab.adoc
to try out the custom service broker setup.
The https://github.com/mstine/haash-broker application starts and is running fine:
$ cf apps
name requested state instances memory disk urls
haash-broker started 1/1 768M 1G haash-broker.vbox.mojito, haash-broker.192.168.50.6.xip.io
I can access it from my host machine's browser just fine:
http://haash-broker.192.168.50.6.xip.io/v2/catalog
But when I execute the
cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.6.xip.io
I get
$ cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.6.xip.io
Creating service broker haash-broker as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker could not be reached: http://haash-broker.192.168.50.6.xip.io/v2/catalog
When I log into the CC VM:
$ bosh -e vbox -f cf ssh api/eb4cec99-bab1-4513-a980-fb92775ac2d8
I can ping the hostname:
api/eb4cec99-bab1-4513-a980-fb92775ac2d8:~$ sudo ping haash-broker.192.168.50.6.xip.io
PING haash-broker.192.168.50.6.xip.io (192.168.50.6) 56(84) bytes of data.
64 bytes from 192.168.50.6: icmp_seq=1 ttl=64 time=0.080 ms
But wget connection gets refused:
api/eb4cec99-bab1-4513-a980-fb92775ac2d8:~$ wget http://warreng:natedogg@haash-broker.192.168.50.6.xip.io/v2/catalog
--2018-04-06 04:19:05-- http://warreng:*password*@haash-broker.192.168.50.6.xip.io/v2/catalog
Resolving haash-broker.192.168.50.6.xip.io (haash-broker.192.168.50.6.xip.io)... 192.168.50.6
Connecting to haash-broker.192.168.50.6.xip.io (haash-broker.192.168.50.6.xip.io)|192.168.50.6|:80... failed: Connection refused.
The firewall permits everything on that VM (sudo iptables -L).
The hostname gets resolved properly. The ping works and the 80 port is open on the target IP, since I can reach it from my host browser.
How can it be that wget doesn't work in such a situation?
This also seems to be the reason why creating a service broker with cf create-service-broker fails for me.
UPDATE
I've managed to execute the cf create-service-broker command with the URL of an nginx reverse proxy running outside of my bosh-lite environment. The proxy redirects to the same initial URL, http://haash-broker.192.168.50.6.xip.io,
and the command succeeds this way.
But the subsequent
cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.1.xip.io:9999
cf enable-service-access haash
cf create-service HaaSh basic my-hash
(where haash-broker.192.168.50.1.xip.io:9999 is my nginx proxy) fails with
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://haash-broker.192.168.50.1.xip.io:9999/v2/service_instances/4ef19154-d238-4cb3-8003-803fba53af3f?accepts_incomplete=true. Status Code: 400 Bad Request, Body: {"timestamp":1523008856993,"error":"Bad Request","status":400,"message":""}
I can see in both the nginx and broker app logs that the request reaches the broker and it answers with 400.
Debugging now why.
Can you post the result of the --server-response option used with wget? Also, what happens when you try to curl the broker?
The broker requires credentials, but it would be interesting to see whether it responds with 401 or 500 to the first request that wget makes without credentials.
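For example, with the credentials from the question, either of these would show the status line and headers:

curl -v -u warreng:natedogg http://haash-broker.192.168.50.6.xip.io/v2/catalog
wget --server-response -O - http://warreng:natedogg@haash-broker.192.168.50.6.xip.io/v2/catalog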
So I created a new EC2 instance and installed Docker on it.
I deployed code from (https://github.com/commonsearch/cosr-front/blob/master/INSTALL.md) and followed the install instructions.
The install was successful and I started the server:
[ec2-user@ip-172-30-0-127 cosr-front]$ make docker_devserver
docker run -e DOCKER_HOST --rm -v "/home/ec2-user/cosr-front:/go/src/github.com/commonsearch/cosr-front:rw" -w /go/src/github.com/commonsearch/cosr-front -p 9700:9700 -i -t commonsearch/local-front make devserver
mkdir -p build
go build -o build/cosr-front.bin ./server
GODEBUG=gctrace=1 COSR_DEBUG=1 ./build/cosr-front.bin
2016/05/28 16:32:38 Using Docker host IP: 172.17.0.1
2016/05/28 16:32:38 Server listening on 127.0.0.1:9700 - You should open http://127.0.0.1:9700 in your browser!
Well, now when I want to access it from outside, I can't! I can't even curl the local server.
When I run docker ps it gives me the correct port forwarding:
[ec2-user@ip-172-30-0-127 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a9f77e1eeb1 commonsearch/local-front "make devserver" 4 minutes ago Up 4 minutes 0.0.0.0:9700->9700/tcp stoic_hopper
9ff00fe3e70d commonsearch/local-elasticsearch-devindex "/docker-entrypoint.s" 4 minutes ago Up 4 minutes 0.0.0.0:39200->9200/tcp, 0.0.0.0:39300->9300/tcp kickass_wilson
These are my docker images:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 3e205118cd3f 17 minutes ago 853.3 MB
<none> <none> 1d233da1fa59 2 hours ago 955.7 MB
debian jessie ce58426c830c 4 days ago 125.1 MB
commonsearch/local-front latest 30de7ab48d43 7 weeks ago 1.024 GB
commonsearch/local-elasticsearch-devindex latest b1156ada5a24 11 weeks ago 383.2 MB
commonsearch/local-elasticsearch latest 808e72f49b4a 3 months ago 355.2 MB
I have tried disabling IPv6 and all kinds of things Google offered me, but without success.
Any ideas?
EDIT:
Also, if I enter the Docker container for the frontend (using docker exec), then I CAN PING AND CURL the frontend.
But I can't from the outside (neither via SSH, nor from my home PC using a browser).
Also my docker version:
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5/1.9.1
Built:
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5/1.9.1
Built:
OS/Arch: linux/amd64
I made an issue at GitHub as well and one guy saved the day.
Here's his response:
Server listening on 127.0.0.1:9700
Your application is listening on localhost. localhost is scoped to the container itself. Thus to be able to connect to it, you would have to be inside the container.
To fix, you need to get your application to listen on 0.0.0.0 instead.
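One way to confirm the bind address from inside the running container (a sketch; it assumes netstat is available in the image, and uses the container name from the docker ps output above):

docker exec -it stoic_hopper netstat -tlnp

A local address of 127.0.0.1:9700 means container-only; 0.0.0.0:9700 is reachable through the published port.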
127.0.0.1 is the loopback address for the local (EC2) instance. I just recreated your problem following the same instructions and setting up the server in a docker container on an EC2 instance.
If you open another ssh session to your EC2 instance you CAN curl the loopback address, which just spits out the HTML shown below.
<!DOCTYPE html><html lang="en"><head><title>
Common Search
</title><meta content="/apple-touch-icon-precomposed.png" itemprop="image"><link href="/favicon.ico" rel="shortcut icon"><!-- CSS: This will be replaced in templates.go:preprocessTemplate() by the inline, compiled CSS
if the file build/static/css/index.css exists --><link rel="stylesheet" href="/css/global.css"/><link rel="stylesheet" href="/css/header.css"/><link rel="stylesheet" href="/css/footer.css"/><link rel="stylesheet" href="/css/hits.css"/><link rel="stylesheet" href="/css/responsive.css"/><!-- ENDCSS --><meta name="viewport" content="width=device-width, initial-scale=1"></head><body class="full"><header id="h"><div class="about">About</div><form id="f" action="/" method="GET" data-init="{"q":"","p":1,"g":""}">Common Search<div id="w"><div id="qw"><input id="q" name="q" type="text" size="60" value="" autofocus tabindex="3"/></div><span id="g"><select name="g" tabindex="4"><option value="ar">AR</option><option value="de">DE</option><option selected value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ja">JA</option><option value="ko">KO</option><option value="nl">NL</option><option value="pl">PL</option><option value="pt">PT</option><option value="ru">RU</option><option value="vi">VI</option><option value="zh">ZH</option><option value="all">ALL</option></select></span><input id="s" type="submit" value="🔍" tabindex="5"/></div></form></header><div id="hits"></div><div id="dbg"></div><div id="pager" data-page="1"></div><script src="/js/index.js" type="text/javascript"></script></body></html>
However, I doubt this is what you actually want.
If you want to be able to access the hosted server from your (or any other) computer you need to edit the security group for your EC2 instance.
From the nav bar on the left side of the AWS console, select Network & Security -> Security Groups. Select the security group that applies to your current EC2 instance (assuming you made it with the launch wizard, it will have a name like 'launch-wizard-1 created 2016-05-28T12:57:23.487-04:00'). In the lower half of the console, select the Inbound tab. Add a new rule to allow TCP on port 9700 from any (or a specific range of) IP(s); the resulting entry is a custom TCP rule for port 9700 with source 0.0.0.0/0.
My TCP rule is set up to allow inbound traffic from ANY IP address on that port; you may want to configure it differently for security purposes.
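The same rule can be added from the command line if you prefer (a sketch; sg-xxxxxxxx is a placeholder for your security group ID):

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 9700 --cidr 0.0.0.0/0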
Once the rule is set up, you should be able to access the web server at the public IP of your EC2 instance (which can be found on the Instances page of the AWS console). The address you should access is http://<EC2-public-IP>:9700.
Hope this helps!