riak-cs create bucket fails with: bucket does not exist

I'm attempting to create a bucket in RiakCS using a curl PUT command, and the command is failing. This is the command I'm running:
curl -ks https://p-riakcs.example.com/user:secret/riak/test -XPUT -H Content-Type:application/json --data-binary '{"props":{"n_val":5}}'
And this is the error I'm getting:
<?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist.</Message><Resource>/user:secret/riak/test</Resource><RequestId></RequestId></Error>
I was under the impression that if I give RiakCS the name of a new bucket, it would create the bucket on the fly.
How can I issue this command correctly and create a new bucket in RiakCS?
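A hedged note: Riak CS exposes an S3-compatible API, so buckets are normally created with an authenticated S3 PUT on the bucket itself, not through the Riak /riak/&lt;bucket&gt; properties URL used above, which is why the S3 layer answers with NoSuchBucket. A minimal sketch with s3cmd, assuming a ~/.s3cfg already pointing at the Riak CS endpoint with your access/secret keys, and using the bucket name from the question:
# create the bucket through the S3 API; s3cmd signs the request for you
s3cmd mb s3://test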

Related

What is the Google Cloud CLI command to retrieve the files created on the current date in a bucket, without specifying the date manually?

What is the Google Cloud CLI command to retrieve the files created on the current date from a Google Storage bucket, without specifying the date manually? Do we have any sysdate or createdate kind of command?
This CLI command lists the objects in the bucket you specify along with their creation timestamps, sorted newest first:
gsutil ls -l gs://[your-bucket-name]/ | sort -r -k 2
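If you only want today's files, a hedged follow-up is to filter that listing by the current date; this assumes gsutil's usual ISO timestamps in UTC:
# keep only the entries whose creation timestamp starts with today's date
gsutil ls -l gs://[your-bucket-name]/ | grep "$(date -u +%Y-%m-%d)"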

authentication for GCP STT Quickstart problem

I am following the GCP Speech-to-Text Quickstart. As best I can tell, I have followed all the setup steps.
Enabled STT for my project.
Generated Service API keys and downloaded the JSON file.
Installed SDK and initialized.
In Windows CMD shell, I set GOOGLE_APPLICATION_CREDENTIALS to the downloaded JSON file.
In Windows CMD shell, I executed gcloud auth activate-service-account <my service email generated by GCP> --key-file= "mypath/JSON key file".
I executed gcloud auth list and I see my project account identifier.
I executed the example curl command:
curl -s -H "Content-Type: application/json" -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) https://speech.googleapis.com/v1/speech:recognize -d @sync-request.json
And get this error:
{
  "error": {
    "code": 401,
    "message": "Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
    "status": "UNAUTHENTICATED"
  }
}
Nowhere in the Quickstart steps does it mention OAuth.
As a test, I executed:
gcloud auth application-default print-access-token
And got this:
(gcloud.auth.application-default.print-access-token) File "mypath/JSON key file" was not found.
Even though the file exists in the folder I specify.
Trying something else, I executed the Java example in the SDK. It creates a very simple SpeechClient with no credentials, which seems suspect. I made the GOOGLE_APPLICATION_CREDENTIALS env variable available to the example. I think the example uses gRPC, but I'm not sure.
The example hangs at:
RecognizeResponse response = speech.recognize(config, audio);
Looking online, I found the likely suspect is bad authentication, the same problem as with the CMD-line example.
Any and all guidance is appreciated.
Did you run the curl command from the same directory where your JSON key file is located?
Google's documentation states the following:
Note that to pass a filename to curl you use the -d option (for
"data") and precede the filename with an @ sign. This file should be
in the same directory in which you execute the curl command.
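For reference, a hedged restatement of the Quickstart command with the @ filename syntax; the $(...) expansion only works in a POSIX shell (in Windows CMD, paste the token in directly), and the command must be run from the directory containing sync-request.json:
curl -s -H "Content-Type: application/json" -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" https://speech.googleapis.com/v1/speech:recognize -d @sync-request.json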
I have the answer to the CLI issue. A dumb mistake on my part: when I set GOOGLE_APPLICATION_CREDENTIALS, I wrapped the pathname in double quotes. Sigh. I reset the env variable without the double quotes.
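For anyone hitting the same thing, a minimal sketch of the reset in Windows CMD (the path is a placeholder):
:: no quotes around the path; CMD would keep them as part of the value
set GOOGLE_APPLICATION_CREDENTIALS=C:\path\to\key.json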
I could successfully run gcloud auth application-default print-access-token and it printed out the token.
I tried the curl command again with $(gcloud auth....) and got the same error. Next, I tried the curl command with the $(gcloud auth....) expression replaced by the token printed above, and it worked!
Next, I need to resolve the Java example and I am good.
No need to be suspicious:
If you don't specify credentials when constructing the client, the client library will look for credentials via the environment variable GOOGLE_APPLICATION_CREDENTIALS.
In your Java code, try printing System.getenv("GOOGLE_APPLICATION_CREDENTIALS") to verify it's set. It probably isn't, depending on how you are setting it in your IDE or terminal.
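A quick shell-level check, run in the same CMD session before launching the Java example; if CMD echoes the variable name back literally, it is not set:
echo %GOOGLE_APPLICATION_CREDENTIALS%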

jenkinsfile - copy files to s3 and make public

I am uploading a website to an S3 bucket for hosting. I upload from a Jenkins build job using this in the Jenkinsfile:
withAWS(credentials: 'aws-cred') {
    sh 'npm install'
    sh 'ng build --prod'
    s3Upload(
        file: 'dist/topic-creation',
        bucket: 'bucketName',
        acl: 'PublicRead'
    )
}
After this step I go to the S3 bucket and get the URL (I have configured the bucket for hosting). When I go to the endpoint URL I get a 403 error. When I go back to the bucket and give all the uploaded items public access, the URL brings me to my website.
I don't want to make the bucket public; I want to give the files public access. I thought adding the line acl: 'PublicRead' shown above would do this, but it does not.
Can anyone tell me how I can upload the files and give them public access from a Jenkinsfile?
Thanks
Install the S3Publisher plugin on your Jenkins instance: https://plugins.jenkins.io/s3/
In order to upload the local artifacts to your S3 bucket with public access, use the following pipeline step (you can also generate it with the Jenkins Pipeline Syntax tool):
def identity = awsIdentity()
s3Upload acl: 'PublicRead', bucket: 'NAME_OF_S3_BUCKET', file: 'THE_ARTIFACT_TO_BE_UPLOADED_FROM_JENKINS', path: 'PATH_ON_S3_BUCKET', workingDir: '.'
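If the pipeline step still leaves the objects private, an equivalent hedged alternative is to upload from a sh step with the AWS CLI, which sets the object ACL explicitly; the bucket and directory names are the asker's, and note that a bucket's Block Public Access settings, if enabled, override object ACLs:
# recursive upload, marking every object with a public-read ACL
aws s3 cp dist/topic-creation s3://bucketName/ --recursive --acl public-read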
In case of a Free-style build, the same settings are available in the plugin's post-build configuration (sample screenshot omitted).

Downloading multiple files from S3 using 'aws cli' in python

I am trying to download multiple files from S3 using the aws cli in Python. Using pip install, I installed the aws cli and was able to successfully pass credentials. But when I try to download multiple files, I get the following error:
fatal error: invalid literal for int() with base 10: 'us-east-1.amazonaws.com'
My code to download the file looks like this:
aws s3 cp "s3://buckets/testtelligence/saurav_shekhar/test/" "C:/HD/Profile data/Downloads" --recursive
Also, my C:\Users\USERNAME\.aws\config is
[default]
region = Default region name [None]:us-east-1
output = Default output format [None]: table
I am not sure what that error means or how to resolve it.
It looks like the interactive prompt text was pasted into the config file; the colon in the region value ends up in the constructed S3 endpoint, so everything after it ('us-east-1.amazonaws.com') is parsed as a port number, which is where the int() error comes from. The contents of your .aws/config file should look like:
[default]
region = us-east-1
output = table
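Alternatively, a sketch of regenerating the file with the interactive prompt instead of editing it by hand; the region and output values are the ones from the question, and the key placeholders are yours to fill in (the prompt text itself must never be pasted into the file):
aws configure
# AWS Access Key ID [None]: <your key>
# AWS Secret Access Key [None]: <your secret>
# Default region name [None]: us-east-1
# Default output format [None]: table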

Unable to use custom MapReduce jar files in Cosmos

I created my own MapReduce jar file and tested it successfully in Cosmos' old Hadoop cluster using the HDFS shell commands. The next step was to test the same jar in the new cluster, so I uploaded it to the new cluster's HDFS, to my home folder (user/my.username).
When I try to start a MapReduce job using the curl POST below,
curl -X POST "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/my.username/jobs" -d '{"jar":"dt.jar","class_name":"DistanceTest","lib_jars":"dt.jar","input":"input","output":"output"}' -H "Content-Type: application/json" -H "X-Auth-Token: xxxxxxxxxxxxxxxxxxx"
I get:
{"success":"false","error":255}
I tried different path values for the jar and I get the same result. Do I have to upload my jar somewhere else, or am I missing some necessary steps?
There was a bug in the code, already fixed in the global instance of FIWARE Lab.
I've tested this:
$ curl -X POST "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/myuser/jobs" -d '{"jar":"mrjars/hadoop-mapreduce-examples.jar","class_name":"wordcount","lib_jars":"mrjars/hadoop-mapreduce-examples.jar","input":"testdir","output":"outputdir"}' -H "Content-Type: application/json" -H "X-Auth-Token: mytoken"
{"success":"true","job_id": "job_1460639183882_0016"}
Please observe that in this case, mrjars/hadoop-mapreduce-examples.jar is a relative path under hdfs:///user/myuser. Always use relative paths with this operation. You can check the jar is in place with a WebHDFS listing:
$ curl -X GET "http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1/user/myuser/mrjars?op=liststatus&user.name=myuser" -H "X-Auth-Token: mytoken"
{"FileStatuses":{"FileStatus":[{"pathSuffix":"hadoop-mapreduce-examples.jar","type":"FILE","length":270283,"owner":"myuser","group":"myuser","permission":"644","accessTime":1464702215233,"modificationTime":1464702215479,"blockSize":134217728,"replication":3}]}}
The result:
$ curl -X GET "http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1/user/myuser/outputdir?op=liststatus&user.name=myuser" -H "X-Auth-Token: mytoken"
{"FileStatuses":{"FileStatus":[{"pathSuffix":"_SUCCESS","type":"FILE","length":0,"owner":"myuser","group":"myuser","permission":"644","accessTime":1464706333691,"modificationTime":1464706333706,"blockSize":134217728,"replication":3},{"pathSuffix":"part-r-00000","type":"FILE","length":128,"owner":"myuser","group":"myuser","permission":"644","accessTime":1464706333264,"modificationTime":1464706333460,"blockSize":134217728,"replication":3}]}}
I'll upload the fix to the Cosmos repo in GitHub ASAP.