gcurl: command not found in Google Cloud Shell - google-cloud-platform

In Google Cloud Shell, I would like to see a list of enabled services. When I run the following command
gcurl "https://serviceusage.googleapis.com/v1/projects/myProjectId/services?filter=state:ENABLED"
I get this error:
-bash: gcurl: command not found
How do I install gcurl?

gcurl is an alias for regular curl plus some headers:
alias gcurl='curl -H "$(oauth2l header --json ~/credentials.json cloud-platform userinfo.email)" -H "Content-Type: application/json"'
Please see here for more details.
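If you don't want to install oauth2l, a rough equivalent in Cloud Shell (where gcloud is already authenticated) is to build the header from gcloud's access token. This is just a sketch, not the official definition of gcurl:
# sketch: an authenticated alias without oauth2l, using gcloud's token
alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json"'
gcurl "https://serviceusage.googleapis.com/v1/projects/myProjectId/services?filter=state:ENABLED"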

Add files to S3 Bucket using Shell Script

Goal: push the files in gri/ to an S3 bucket using the SendToS3.sh shell script.
I am following this Tutorial.
SendToS3.sh is in the current working directory. It needs to upload all files in cwd's gri/ that are not in sub-folders.
Terminal:
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/data$ ./SendToS3.sh
./SendToS3.sh: line 17: logInfo: command not found
curl: Can't open '/gri/*'!
curl: try 'curl --help' or 'curl --manual' for more information
curl: (26) Failed to open/read local data from file/application
./SendToS3.sh: line 27: logInfo: command not found
SendToS3.sh:
bucket=simulation
files_location=/gri/ # !
now_time=$(date +"%H%M%S")
contentType="application/x-compressed-tar"
dateValue=`date -R`
# your key goes here..
s3Key= # CENSORED
# your secrets goes here..
s3Secret= # CENSORED
function pushToS3()
{
    files_path=$1
    for file in $files_path*
    do
        fname=$(basename $file)
        logInfo "Start sending $fname to S3"
        resource="/${bucket}/${now_date}/${fname}_${now_time}"
        stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
        signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
        curl -X PUT -T "${file}" \
            -H "Host: ${bucket}.s3.amazonaws.com" \
            -H "Date: ${dateValue}" \
            -H "Content-Type: ${contentType}" \
            -H "Authorization: AWS ${s3Key}:${signature}" \
            https://${bucket}.s3.amazonaws.com/${now_date}/${fname}_${now_time}
        logInfo "$fname has been sent to S3 successfully."
    done
}
pushToS3 $files_location
Please let me know if there is anything else I can add to the post.
Your system doesn't have logInfo, so maybe switch that command to echo. For your curl error, it could be a file permission issue; try running:
chmod +rwx gri
Alternatively, you could use the AWS CLI instead, which is much easier to use in my opinion.
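If you'd rather keep the logInfo calls than replace them with echo, a minimal stand-in is to define the function yourself near the top of SendToS3.sh (the log format here is just an assumption about what the tutorial intended):
# minimal stand-in for the missing logInfo command
logInfo() {
    echo "[INFO] $(date +'%H:%M:%S') $*"
}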
The error is at the following line: the folder /gri/ is empty, or the user launching the script does not have access to it.
curl: Can't open '/gri/*'!
Moreover, it seems that your server does not have the executable logInfo installed, or it is not accessible from your script SendToS3.sh.
Verify the installation and add the binary to the PATH environment variable.
./SendToS3.sh: line 17: logInfo: command not found
Bonus: instead of using curl, you can use the AWS CLI, which is optimized to interact with AWS components. Please find the documentation for S3 here: https://docs.aws.amazon.com/cli/latest/reference/s3/
For example, you can copy a file to a bucket with this command:
aws s3 cp <path_to_file> s3://<bucket_name>/<path>/
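As a rough sketch, the whole upload loop from the script could be replaced with the AWS CLI (assuming aws configure has already been run, and reusing the bucket and file layout from the script above):
bucket=simulation
now_time=$(date +"%H%M%S")
for file in ./gri/*; do
    [ -f "$file" ] || continue  # skip sub-folders
    fname=$(basename "$file")
    aws s3 cp "$file" "s3://${bucket}/${fname}_${now_time}" \
        --content-type "application/x-compressed-tar"
done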

How to check for Jupyter active notebooks through command line

I have an AWS EMR cluster running JupyterHub version 0.8.1+, and I want to check whether there are any active notebooks running code.
I've tried the commands below, but they don't output what I'm looking for, since the user's server is always running and notebooks can be open without any code being executed.
# only lists running servers and jovyan is always running.
sudo docker exec jupyterhub jupyter notebook list
# No useful information outputted
curl -k -i -H "Accept: application/json" "https://localhost:9443/api/sessions"
# always lists processes regardless of running notebooks
ps aux | grep ipykernel
# The last_activity only updates when a user creates a new file or folder in the ui.
curl -k https://localhost:9443/hub/api/users/$user -H "Authorization: token $admin_token" | jq -r .last_activity
curl -k https://localhost:9443/hub/api/users -H "Authorization: token $admin_token" | jq -r .last_activity
I'm following this AWS blog post to check whether the entire EMR cluster is idle before terminating it, but they never seem to have fully implemented the Jupyter checks.
https://aws.amazon.com/blogs/big-data/optimize-amazon-emr-costs-with-idle-checks-and-automatic-resource-termination-using-advanced-amazon-cloudwatch-metrics-and-aws-lambda/
Most of the files referenced can be found on GitHub: https://github.com/septian-putra/emr-monitoring
To see whether a notebook is "idle" or "busy", you can run curl -ks https://localhost:9443/user/jovyan/api/kernels -H "Authorization: token ${admin_token}"
With this command, all you need to do is pipe it into grep -q inside a simple if statement to get a true/false busy value.
if curl -ks https://localhost:9443/user/jovyan/api/kernels -H "Authorization: token ${admin_token}" | grep -q "busy"; then
    JUPYTER_BUSY_NOTEBOOKS=1
else
    JUPYTER_BUSY_NOTEBOOKS=0
fi
(curl -ks gives silent output and ignores SSL certificate verification; jovyan is my admin user.)
Documentation
https://jupyter-kernel-gateway.readthedocs.io/en/latest/websocket-mode.html#http-resources
/api/sessions might also be useful to look at.
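For scripting, a small sketch that counts busy kernels with jq (assuming jq is installed, and reusing the same jovyan user and admin token as above):
busy_count=$(curl -ks "https://localhost:9443/user/jovyan/api/kernels" \
    -H "Authorization: token ${admin_token}" \
    | jq '[.[] | select(.execution_state == "busy")] | length')
echo "Busy kernels: ${busy_count}"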

wso2 mutual ssl curl command APIM 3.2.0

I tried setting up ALLINONE locally, following the documentation https://apim.docs.wso2.com/en/latest/learn/api-security/api-authentication/secure-apis-using-mutual-ssl/
It worked, but what would the curl command for the request be? The documentation only covers testing through Postman.
You can use the following curl commands when you want to use the header-based approach.
curl -X GET -H "X-WSO2-CLIENT-CERTIFICATE: (Base64 encoded public cert)" "https://localhost:8243/mock/v1" -v
curl -X GET -H "X-WSO2-CLIENT-CERTIFICATE: (Base64 encoded public cert)" "http://localhost:8280/mock/v1" -v
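The header value is the client's public certificate, Base64 encoded. One way to produce it on Linux (the file name client.crt is just a placeholder for your certificate) is:
# encode the public cert on a single line so it fits in one header value
CERT=$(base64 -w 0 client.crt)
curl -X GET -H "X-WSO2-CLIENT-CERTIFICATE: ${CERT}" "https://localhost:8243/mock/v1" -v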
In order for this to work, you need to add the following configuration to deployment.toml in the wso2am-3.2.0/repository/conf location.
[apimgt.mutual_ssl]
enable_client_validation = false
You can use the following curl commands if you are using the cert and key.
curl -k --cert int.ext.wso2.com.crt --key int.ext.wso2.com.key -X GET "https://localhost:8243/mock/v1" -v

How to connect to AWS simple AD using ldapsearch?

I've created a Simple AD on AWS and I'm trying to connect to it using the Administrator credentials set up while creating the Simple AD. I'm running the ldapsearch command from another EC2 instance in the same subnet. However, I'm running into an authentication error, and I'm pretty sure it's not the password, as I've tried changing it multiple times with no luck.
Below is the ldapsearch command I'm using.
$ ldapsearch -x -v -h "10.*.*.112" -b "dc=corp-testing,dc=example,dc=com" -D "Administrator@corp-testing.example.com" -W sAMAccountName=Administrator
Below is the output:
ldap_initialize( ldap://10.*.*.112 )
Enter LDAP Password:
ldap_bind: Invalid credentials (49)
additional info: 80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db1
Would someone be able to point out the issue on this?
I ran into the same issue and found the solution: the username needs to be prefixed with the directory's NetBIOS name (this is available from the Directory details page). Then log in with:
ldapsearch -x \
-h 10.*.*.112 \
-b "cn=Users,dc=corp-testing,dc=example,dc=com" \
-D "${NetBIOSNAME}\\Administrator" \
-W sAMAccountName=Administrator
Obviously, change ${NetBIOSNAME} to the appropriate value.
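For example, if the directory's NetBIOS name happened to be CORP (purely illustrative), the command would be:
ldapsearch -x \
    -h 10.*.*.112 \
    -b "cn=Users,dc=corp-testing,dc=example,dc=com" \
    -D "CORP\\Administrator" \
    -W sAMAccountName=Administrator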
Okay, I figured it out, though I don't know why. Try changing your search to:
ldapsearch -x -v -H "ldap://10.*.*.112:389/" -b "dc=corp-testing,dc=example,dc=com" -D "cn=Administrator,dc=corp-testing,dc=example,dc=com" -W sAMAccountName=Administrator
I tried this several times without the trailing / on the URI but it didn't work.
The problem was that Administrator is inside the Users node, so -D should include cn=Users as well:
ldapsearch -x -v -h "10.*.*.112" -b "dc=corp-testing,dc=example,dc=com" -D "cn=Administrator,cn=Users,dc=corp-testing,dc=example,dc=com" -W sAMAccountName=Administrator

How to test web service using command line curl

I am building a web service for a web application, and I would like a simple tool to test it as I develop. I have tried some Firefox plug-ins (Poster, 'REST Client'), and even though these work fine, I have been unable to upload files with them.
Also, I would rather have a command-line tool that I can use to easily write a set of integration tests for this web service and that I can send to consumers of the web service as an example.
I know that curl can work for this but would like a few examples, especially around authentication (using HTTP Basic) and file uploads.
Answering my own question.
curl -X GET --basic --user username:password \
https://www.example.com/mobile/resource
curl -X DELETE --basic --user username:password \
https://www.example.com/mobile/resource
curl -X PUT --basic --user username:password -d 'param1_name=param1_value' \
-d 'param2_name=param2_value' https://www.example.com/mobile/resource
POSTing a file and additional parameter
curl -X POST -F 'param_name=@/filepath/filename' \
-F 'extra_param_name=extra_param_value' --basic --user username:password \
https://www.example.com/mobile/resource
In addition to the existing answers, it is often desirable to format the REST output (typically JSON and XML lack indentation). Try this:
$ curl https://api.twitter.com/1/help/configuration.xml | xmllint --format -
$ curl https://api.twitter.com/1/help/configuration.json | python -mjson.tool
Tested on Ubuntu 11.04/11.10.
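If jq is installed, it formats (and colorizes) the JSON in one step; for example:
$ curl -s https://api.twitter.com/1/help/configuration.json | jq .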
Another issue is the desired content type. Twitter uses the .xml/.json extension, but more idiomatic REST would require an Accept header:
$ curl -H "Accept: application/json"
From the documentation at http://curl.haxx.se/docs/httpscripting.html:
HTTP Authentication
curl --user name:password http://www.example.com
Put a file to an HTTP server with curl:
curl --upload-file uploadfile http://www.example.com/receive.cgi
Send POST data with curl:
curl --data "birthyear=1905&press=%20OK%20" http://www.example.com/when.cgi