Is there any way for a Digital Ocean droplet to ask "Who am I"?

I'm just trying to definitively get the server ID & region from a DO droplet. Does anyone know if this information is exposed either on the server or via a 'whoami' API call?

DigitalOcean has a metadata API that exposes this information. This article gives a brief introduction, and there is full API documentation.
You can see a list of available endpoints by running:
$ curl http://169.254.169.254/metadata/v1/
id
hostname
user-data
vendor-data
public-keys
region
interfaces/
Or hit the individual endpoints to retrieve info on the droplet:
$ curl http://169.254.169.254/metadata/v1/id
6161946
$ curl http://169.254.169.254/metadata/v1/region
nyc3
$ curl http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address
45.56.177.245
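If you need these values in a script, here is a minimal sketch (the metadata service is only reachable from inside the droplet itself):
# Query the droplet's own metadata endpoints and capture the results.
DROPLET_ID=$(curl -s http://169.254.169.254/metadata/v1/id)
REGION=$(curl -s http://169.254.169.254/metadata/v1/region)
echo "Droplet ${DROPLET_ID} is running in ${REGION}"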

GCP - get url dynamically of cloud run instances

I have some Cloud Run services that make HTTP requests to one another, and the URL is hardcoded in the code. Is there a way to resolve the URL from the Cloud Run service name or another attribute?
Another possible solution could be using Method: namespaces.services.get.
If the service name is known to you, you can make a GET HTTP request to https://{endpoint}/apis/serving.knative.dev/v1/{name}, where endpoint is one of the supported endpoints and name is the name of the Cloud Run service to retrieve; it takes the form namespaces/{namespace}/services/{service}. For Cloud Run (fully managed), replace {namespace_id} with the project ID or number.
Authorization requires the following IAM permission on the specified resource name: run.services.get
For example:
curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/your-project/services/your-service | grep url
Output:
"url": "https://cloud-run-xxxxxxxxxx-uc.a.run.app"
There is a gcloud command to do so. You could, for instance, get the URL during your build and save it into an environment variable. The following command will get the complete URL:
gcloud run services describe YOUR_CLOUDRUN_NAME --region=INSTANCE_REGION --platform=managed --format=yaml | grep -m 1 url | awk '{print $NF}'
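If you only need the URL, here is a sketch that skips the grep/awk step by asking gcloud to print the field directly (same placeholder service and region names as above):
# Capture the bare service URL into an environment variable at build time;
# --format='value(status.url)' prints just the URL, no surrounding YAML.
SERVICE_URL=$(gcloud run services describe YOUR_CLOUDRUN_NAME \
  --region=INSTANCE_REGION --platform=managed \
  --format='value(status.url)')
echo "$SERVICE_URL"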
There is no easy way for now (but Cloud Next '21 is coming; maybe there will be a great announcement on that. It's a feature requested by many alpha testers like me).
However, you can implement a bunch of API calls to achieve that. I wrote an article where I use this to get the current Cloud Run service URL, but it could be another service.
It's in Golang. Have a look at it, and let me know if you have trouble translating the calls into your preferred language.
You can:
gcloud run services describe ${NAME} \
--platform=managed \
--region=${REGION} \
--project=${PROJECT} \
--format="value(status.address.url)"

Google Cloud Shell: How to find your web preview URL

When using Google Cloud Shell on the Google Cloud Platform console, clicking on the "web preview" button will redirect you to a URL that is serving your app on port 8080.
So, for example, the URL for your instance may be something like:
https://8080-1234abcd-abcd-1234-abcd-1234abcd.europe-west1.cloudshell.dev/?authuser=0
Is there a way to determine what this URL is going to be from the terminal, without having to click on the "web preview" button?
Note: For those wondering what the use case for this is: I am using the SSH Cloud Shell access feature that allows you to remote into your Cloud Shell instance via SSH from any terminal emulator.
Unfortunately, doing so means that you no longer have access to the "web preview" button (as you are using your own terminal and not the web-based one), and so have no way of knowing what the URL for your web preview is going to be.
You can determine what the URL will be from the terminal with the environment variable WEB_HOST, which is preconfigured in Cloud Shell.
The formatted preview URL will look like this:
https://$PORT-$WEB_HOST
See Preview web apps Docs
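For example, to print the preview URL for port 8080 from inside Cloud Shell:
PORT=8080
echo "https://${PORT}-${WEB_HOST}"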
Cloud Shell is a tool that is primarily intended to provide an environment for managing your GCP resources; it is not intended to be used as a development environment (but it can be used to test code snippets).
Access to the Cloud Shell from the command line (via ssh on a non-web terminal) is in alpha version, as mentioned in this document.
This functionality can be super useful, but at the moment it is not possible to generate the preview URL outside the GCP console (web UI). My recommendation is to open a feature request to allow the creation of a preview URL outside the web terminal.
To create a feature request, you can file an issue in the public issue tracker on this page.
As a workaround, you can generate preview URLs on any cloud or on-premises server by using a free ngrok account. ngrok is a tool that generates ephemeral HTTPS URLs pointing to your localhost services on any port (demo endpoints), in the same way Cloud Shell's web preview works.
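A minimal sketch, assuming ngrok is installed and authenticated and your app listens on port 8080 (the public HTTPS URL it prints is generated by ngrok):
# Expose local port 8080 through an ephemeral public HTTPS URL.
ngrok http 8080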
The rule for generating the URL hostname is not fixed: sometimes it is [port]-[guid]-[region].cloudshell..., while sometimes it has a fixed name [port]-[host]-..., where "host" is the hostname of your Cloud Shell VM.
Is there a way to determine what this URL is going to be from the terminal, without having to click on the "web preview" button?
Yes, it is possible.
Eg:
PORT="8080"
cd "$(mktemp -d)" \
&& echo '<html><body>Hello World</body></html>' > ./index.html \
&& python3 -m http.server "$PORT" &  # http.server replaces Python 2's SimpleHTTPServer
AUTHUSER="0"
ADDPATH=""
echo "https://shell.cloud.google.com/devshell/proxy?authuser=$AUTHUSER&port=$PORT&environment_id=default$ADDPATH"
After clicking the link, the proxy endpoint will authenticate the GCP user and redirect to the final destination (e.g. https://8080-cs-215311858653-default.cs-us-east1-vpcf.cloudshell.dev/?authuser=0).
Although it is possible to generate the final URL with the following commands, it is not a good idea to use it as the port is only accessible after the proxy endpoint authenticates the user.
ZONE=$(curl -s -H "Metadata-Flavor: Google" metadata/computeMetadata/v1/instance/zone)
ZONE="${ZONE##*/}"
REGION=${ZONE%-*}
MACHINE=$(hostname)
MACHINE="${MACHINE%-default*}-default"
PORT="8080"
echo "https://${PORT}-${MACHINE}.cs-${REGION}-vpcf.cloudshell.dev/?authuser=0"

CKAN Data Set Errors

I installed CKAN and I am having difficulty adding the DataStore extension, using "Setting up the DataStore" from the latest CKAN docs as a guide. When I get to the line
curl -X GET "http://127.0.0.1:5000/api/3/action/datastore_search?resource_id=_table_metadata", I get this response: curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused.
When I look at a dataset I created through the CKAN instance through my browser, the data preview on my JSON file shows an error:
Dataset Error Screenshot
and trying to click the upper link to download the file directly also gives me a browser error when it goes to the URL:
Browser Data Download Error
I'm not sure what my next steps should be to figure out what's wrong but I think the FileStore is working since I was able to upload a picture and load it for an Organization listing.
The installation is fresh and has all the default settings from the installation guide so I haven't done any special modifications. Thanks for your help in advance.
Because k-nut's suggestion was the answer but it's in a comment on my question, I thought I'd post an official answer in case anyone else has the same problem. ckan.site_url needs to be set to the specific URL that CKAN is running under, which may not necessarily be a generic one, even if everything else is default-configured. In my case, I have a specific internal URL for my VM that I needed to set.
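A sketch of the relevant line in the CKAN config file; the file path and hostname here are assumptions, so use the config file from your own install and the URL your instance actually answers on:
# In /etc/ckan/default/production.ini (path varies by install):
ckan.site_url = http://my-internal-vm.example.com:8080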
For me, ckan.site_url was set to http://demo.ckan.org. http://localhost took me to the CKAN page as specified in the installation tutorial; then I figured out that the port used was 8080 and not 5000 by going to http://localhost:8080.
So, I ended up using curl -X GET "http://127.0.0.1:8080/api/3/action/datastore_search?resource_id=_table_metadata" instead.

Create Amazon Cloudfront distribution in a script (Windows)

Is there anywhere a decent full example of creating a distribution (ideally with more than one origin - S3 and an AMS) from the command line? I was a bit dismayed to find that it isn't a case of "aws cloudfront blah blah..."
In Windows, assuming no special tools - though any solution that needs a standalone exe is fine. I have been given a hint to use cURL... but can't figure out all the stuff I need to pass in, or indeed how to use cURL to do so - I have found it has a -H param for headers... but I've never used cURL, so I'm a bit lost.
I looked at http://docs.aws.amazon.com/AmazonCloudFront/latest/APIReference/CreateDistribution.html but am bemused by the sketchiness of the 'example', e.g.
POST /2013-09-27/distribution HTTP/1.1
Host: cloudfront.amazonaws.com
Authorization: AWS authentication string
Date: Thu, 17 May 2012 19:37:58 GMT
Other required headers
...
Where do I find my AWS authentication string?
What are the "Other required headers"
Distribution ID I can find on the Cloudfront admin page on the web
I am totally lost - need the real beginners guide here, step by step, ideally cross referenced to the Cloudfront admin page on the web. I'm a C#/SQL desktop apps dev normally, so this is way out of comfort zone.
Have ended up using the Amazon SDK for .NET
Actually, it probably is a matter of aws cloudfront blah blah
When typing aws cloudfront in a recent version of the AWS CLI one gets:
This service is only available as a preview service.
However, if you'd like to use a basic set of cloudfront commands with the
AWS CLI, you can enable this service by adding the following to your CLI
config file:
[preview]
cloudfront=true
Following this advice, you get the command aws cloudfront create-distribution.
I am personally still struggling to find the correct input for required params like --distribution-config
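For what it's worth, here is a sketch of a minimal config with a single S3 origin that should cover the required fields (the bucket name, origin ID, and caller reference are placeholders, and the exact set of required fields can vary by CLI version). Save it as dist-config.json:
{
  "CallerReference": "my-first-distribution",
  "Comment": "Example distribution with a single S3 origin",
  "Enabled": true,
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "my-s3-origin",
        "DomainName": "my-bucket.s3.amazonaws.com",
        "S3OriginConfig": { "OriginAccessIdentity": "" }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "my-s3-origin",
    "ViewerProtocolPolicy": "allow-all",
    "ForwardedValues": {
      "QueryString": false,
      "Cookies": { "Forward": "none" }
    },
    "TrustedSigners": { "Enabled": false, "Quantity": 0 },
    "MinTTL": 0
  }
}
Then pass the file to the CLI:
# All names in dist-config.json above are placeholders.
aws cloudfront create-distribution --distribution-config file://dist-config.json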
Running aws configure set preview.cloudfront true did not work for me. In my case (Linux), the following worked:
Edit the file ~/.aws/config and add the following content to its end:
[preview]
cloudfront=true
This topic helped, but did not mention the [brackets].

How do I extract web wiki pages, which are password protected?

I wish to get a few web pages, and the sub-links on those, which are password protected. I have the user name and the password and can access them from the normal browser UI. But as I wish to save these pages to my local drive for later reference, I am using wget to get them:
wget --http-user=USER --http-password=PASS http://mywiki.mydomain.com/myproject
But the above is not working, as it asks for the password again. Is there any better way to do this, without getting stuck with the system asking for the password again? Also, what is the best option to get all the links and sub-links on a particular page and store them in a single folder?
Update:
The actual page I am trying to access is behind an HTTPS gateway, and the certificate for the same is not getting validated. Is there any way to get through this?
mysystem-dsktp ~ $ wget --http-user=USER --http-password=PASS https://secure.site.mydomain.com/login?url=http://mywiki.mydomain.com%2fsite%2fmyproject%2f
--2010-01-24 18:09:21-- https://secure.site.mydomain.com/login?url=http://mywiki.mydomain.com%2fsite%2fmyproject%2f
Resolving secure.site.mydomain.com... 124.123.23.12, 124.123.23.267, 124.123.102.191, ...
Connecting to secure.site.mydomain.com|124.123.23.12|:443... connected.
ERROR: cannot verify secure.site.mydomain.com's certificate, issued by `/C=US/O=Equifax/OU=Equifax Secure Certificate Authority':
Unable to locally verify the issuer's authority.
To connect to secure.site.mydomain.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.
I tried the --no-check-certificate option also, it is not working. I only get the login page with this option and not the actual page I requested.
Could you try it like this?
wget http://USER:PASSWD@mywiki.mydomain.com/myproject
It seems you're trying to access a page secured by a login form.
You could use the --no-check-certificate option and follow the suggestions in this forum thread: Can't log in with wget.
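A sketch of logging in through the form with wget and reusing the session cookie. The form field names (username/password) and the login URL here are assumptions based on the output above; inspect the actual login form's HTML for the real field names and action URL:
# Step 1: submit the login form and keep the session cookie.
wget --no-check-certificate --save-cookies cookies.txt --keep-session-cookies \
  --post-data 'username=USER&password=PASS' \
  'https://secure.site.mydomain.com/login'
# Step 2: reuse the cookie to fetch the wiki pages one level deep
# into a single folder (no nested directories).
wget --no-check-certificate --load-cookies cookies.txt \
  --recursive --level=1 --no-directories --directory-prefix=myproject \
  'http://mywiki.mydomain.com/myproject'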