We have a problem trying to list Google Container Registry images via the Docker Registry HTTP API V2.
When executing this command:
curl -k -s -X GET "https://eu.gcr.io/v2/_catalog" -L -H "Authorization: Bearer ${TOKEN}"
Sometimes we get an empty JSON response, {"repositories":[]}, and sometimes we get the correct JSON with all the information: {"repositories":["projectname/image1","projectname/image1"]}
Does Google Container Registry fully support Docker Registry HTTP API V2?
Is anybody else experiencing this issue?
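One intermittent-looking failure mode worth ruling out first is shell quoting: the Authorization header must carry the expanded token, and ${TOKEN} is only expanded inside double quotes. A minimal sketch of the difference (the token value here is a placeholder):

```shell
TOKEN="ya29.example-token"          # placeholder, not a real token

# Single quotes: the shell sends the literal text ${TOKEN} in the header.
literal='Authorization: Bearer ${TOKEN}'

# Double quotes: the shell substitutes the variable's value.
expanded="Authorization: Bearer ${TOKEN}"

echo "$literal"
echo "$expanded"
```

Depending on how the registry treats a request without a valid token, this can surface as an empty repository list rather than an explicit error, which would look much like the intermittent behaviour described.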
I created my own MapReduce jar file and tested it successfully in Cosmos' old Hadoop cluster using the HDFS shell commands. The next step was to test the same jar in the new cluster, so I uploaded it to the
new cluster's HDFS, to my home folder (/user/my.username).
When I try to start a MapReduce job using the curl POST below,
curl -X POST "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/my.username/jobs" -d '{"jar":"dt.jar","class_name":"DistanceTest","lib_jars":"dt.jar","input":"input","output":"output"}' -H "Content-Type: application/json" -H "X-Auth-Token: xxxxxxxxxxxxxxxxxxx"
I get:
{"success":"false","error":255}
I tried different path values for the jar and I get the same result. Do I have to upload my jar to somewhere else or am I missing some necessary steps?
There was a bug in the code, already fixed in the global instance of FIWARE Lab.
I've tested this:
$ curl -X POST "http://computing.cosmos.lab.fiware.org:12000/tidoop/v1/user/myuser/jobs" -d '{"jar":"mrjars/hadoop-mapreduce-examples.jar","class_name":"wordcount","lib_jars":"mrjars/hadoop-mapreduce-examples.jar","input":"testdir","output":"outputdir"}' -H "Content-Type: application/json" -H "X-Auth-Token: mytoken"
{"success":"true","job_id": "job_1460639183882_0016"}
Please observe that in this case mrjars/hadoop-mapreduce-examples.jar is a relative path under hdfs:///user/myuser. Always use relative paths with this operation.
$ curl -X GET "http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1/user/myuser/mrjars?op=liststatus&user.name=myuser" -H "X-Auth-Token: mytoken"
{"FileStatuses":{"FileStatus":[{"pathSuffix":"hadoop-mapreduce-examples.jar","type":"FILE","length":270283,"owner":"myuser","group":"myuser","permission":"644","accessTime":1464702215233,"modificationTime":1464702215479,"blockSize":134217728,"replication":3}]}}
The result:
$ curl -X GET "http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1/user/myuser/outputdir?op=liststatus&user.name=myuser" -H "X-Auth-Token: mytoken"
{"FileStatuses":{"FileStatus":[{"pathSuffix":"_SUCCESS","type":"FILE","length":0,"owner":"myuser","group":"myuser","permission":"644","accessTime":1464706333691,"modificationTime":1464706333706,"blockSize":134217728,"replication":3},{"pathSuffix":"part-r-00000","type":"FILE","length":128,"owner":"myuser","group":"myuser","permission":"644","accessTime":1464706333264,"modificationTime":1464706333460,"blockSize":134217728,"replication":3}]}}
I'll upload the fix to the Cosmos repo in GitHub ASAP.
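For completeness, the jar can be uploaded to the HDFS home folder through the same WebHDFS/HttpFS endpoint used in the listings above. A sketch (the jar name dt.jar and the token are placeholders, and the actual transfer is left commented out):

```shell
HDFS_USER="myuser"
JAR="dt.jar"
URL="http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1/user/${HDFS_USER}/${JAR}?op=CREATE&user.name=${HDFS_USER}&data=true"

# HttpFS expects data=true and an octet-stream Content-Type on CREATE;
# -L follows the redirect used by the two-step WebHDFS upload protocol.
# curl -X PUT -L "$URL" -T "$JAR" \
#      -H "Content-Type: application/octet-stream" \
#      -H "X-Auth-Token: mytoken"

echo "$URL"
```

After the upload, the job can be submitted with "jar" set to the relative path dt.jar, in line with the relative-path rule.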
I have a MeteorJS app deployed using Mupx, the less stable version of Mup that uses Docker to deploy. Now that I have deployed it, I am wondering how one can get access to the server logs.
In the non-Docker version you apparently just run mup logs -f, but how to do so with the Docker variant is not properly documented.
Any suggestions?
UPDATE:
I have since discovered you can use docker commands directly:
docker ps will show the id of your application.
docker logs -f ${id} will tail the logs.
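The two steps can be combined into one line (a sketch, assuming a single matching container and a hypothetical image name "myapp"):

```shell
# docker ps -q prints only container IDs; --filter narrows to the app's image.
# docker logs -f "$(docker ps -q --filter ancestor=myapp)"

# The same command, built as a string here only so it can be inspected
# without a running Docker daemon:
LOGS_CMD='docker logs -f "$(docker ps -q --filter ancestor=myapp)"'
echo "$LOGS_CMD"
```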
The Mupx GitHub page claims that mupx logs -f should work.
"It works on my machine."
I have a Django app. I followed this tutorial. OAuth2 works great on my dev box like this:
$ curl -v -H "Authorization: OAuth c52676b24a63b79a564b4ed38db3ac5439e51d47" http://localhost:8000/api/v1/my-model/?format=json
My local dev app finds the header with this line of code:
auth_header_value = request.META.get('HTTP_AUTHORIZATION')
But when I deploy it to my Ubuntu box running Apache, it doesn't find the header.
I added the following to my authentication.py file so I could inspect the values in the log on the remote machine.
logging.error(request.GET)
logging.error(request.POST)
logging.error(request.META)
The header value is mysteriously missing from the output. So I just get 401s.
Did you turn on WSGIPassAuthorization?
http://modwsgi.readthedocs.org/en/latest/configuration-directives/WSGIPassAuthorization.html
Authorisation headers are not passed through by default as doing so
could leak information about passwords through to a WSGI application
which should not be able to see them when Apache is performing
authorisation.
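A sketch of where the directive goes in the Apache virtual host (the server name and paths are placeholders; only the WSGIPassAuthorization line is the actual fix):

```apache
<VirtualHost *:80>
    ServerName example.com
    # Forward the Authorization header to the WSGI application so that
    # request.META['HTTP_AUTHORIZATION'] is populated.
    WSGIPassAuthorization On
    WSGIScriptAlias / /path/to/project/wsgi.py
</VirtualHost>
```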
Hi, I have a Jenkins build pipeline like this:
1. Build the app and deploy it to Artifactory.
2. Run an SSH exec command on the (remote) test server to download the artifacts and deploy them into the right directory.
3. Run web tests against the test server; if they pass, change the build status in Artifactory to something like pre-staging for further manual UAT testing.
My question is: how can I change the build status in Artifactory from a Jenkins job? If using Artifactory's REST API is necessary, can someone share an example? Much appreciated!
Yes, REST API is the easiest way.
You need to perform a Build Promotion call. Please note it requires Artifactory Pro.
It's a POST request accepting a simple JSON string, in which only two properties are mandatory: status and ciUser.
The call should look something like this:
curl -X POST -u admin:password -H "Content-Type: application/json" -d '{"status":"tests passed","ciUser":"jenkinsAdmin"}' "http://localhost:8081/artifactory/api/build/promote/buildName/buildNumber"
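From a Jenkins job this can be wrapped in an Execute Shell build step. A sketch (build name/number, credentials, status, and the optional comment field are placeholders, and the actual call is left commented out):

```shell
BUILD_NAME="myapp"
BUILD_NUMBER="42"
PAYLOAD='{"status":"pre-staging","ciUser":"jenkinsAdmin","comment":"web tests passed"}'

# curl -X POST -u admin:password -H "Content-Type: application/json" \
#      -d "$PAYLOAD" \
#      "http://localhost:8081/artifactory/api/build/promote/${BUILD_NAME}/${BUILD_NUMBER}"

echo "$PAYLOAD"
```

In a real job, Jenkins already exposes BUILD_NUMBER as an environment variable, so only the build name and credentials need to be configured.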