I'm trying to create a C++ library that uses the TFVC REST API so that I can support TFS from within a program.
I've been successful using rapidjson and chilkat to build and send requests for a lot of the functionality so far: add, delete, rename, etc.
My issue is that I cannot seem to apply any locks. I want users to be able to 'check out' a file, and to do so a lock must be applied.
This is for a TFS 2017 server. Here's a link to the TFVC REST API docs:
https://learn.microsoft.com/en-us/rest/api/azure/devops/tfvc/changesets/create?view=azure-devops-rest-5.0#versioncontrolchangetype
Here's my test:
{
    "changes": [
        {
            "changeType": "lock",
            "item": {
                "contentMetadata": {
                    "contentType": "rawText",
                    "encoding": 1200
                },
                "path": "$/TFStestAT/TextFile1.txt",
                "version": "131"
            }
        }
    ],
    "comment": "(sample) Locking a file via Advanced REST Client"
}
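For reference, I assemble this payload with rapidjson along these lines (a trimmed sketch of my code; the chilkat call that actually sends it is omitted):

#include <iostream>
#include "rapidjson/document.h"
#include "rapidjson/stringbuffer.h"
#include "rapidjson/writer.h"

int main() {
    using namespace rapidjson;

    Document d;
    d.SetObject();
    auto& a = d.GetAllocator();

    // item.contentMetadata
    Value meta(kObjectType);
    meta.AddMember("contentType", "rawText", a);
    meta.AddMember("encoding", 1200, a);

    // item
    Value item(kObjectType);
    item.AddMember("contentMetadata", meta, a);
    item.AddMember("path", "$/TFStestAT/TextFile1.txt", a);
    item.AddMember("version", "131", a);

    // changes[0]
    Value change(kObjectType);
    change.AddMember("changeType", "lock", a);
    change.AddMember("item", item, a);

    Value changes(kArrayType);
    changes.PushBack(change, a);

    d.AddMember("changes", changes, a);
    d.AddMember("comment", "(sample) Locking a file via Advanced REST Client", a);

    // Serialize to the request body shown above.
    StringBuffer buf;
    Writer<StringBuffer> w(buf);
    d.Accept(w);
    std::cout << buf.GetString() << "\n";
}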
Here's the response:
{
    "$id": "1",
    "innerException": null,
    "message": "The specified change type Lock is not supported.",
    "typeName": "System.ArgumentException, mscorlib",
    "typeKey": "ArgumentException",
    "errorCode": 0,
    "eventId": 0
}
With no "checkout" changeType and Lock not being supported, how should I go about locking?
Any tips would be greatly appreciated!
Hi, I'm implementing a custom auth flow on a Cognito User Pool. I managed to handle the DefineAuthChallenge and CreateAuthChallenge triggers, but not VerifyAuthChallenge.
I use this documentation as a guide: Verify Auth Challenge Response Lambda Trigger
I take the verify lambda's input and add answerCorrect = true to the response, as described in the documentation. The DefineAuthChallenge and CreateAuthChallenge parts work as expected with the given information. When verifying the challenge answer, I get InvalidLambdaResponseException: Unrecognizable lambda output as a response. The verify lambda exits successfully, returning this object:
{
    "version": 1,
    "triggerSource": "VerifyAuthChallengeResponse_Authentication",
    "region": "eu-central-1",
    "userPoolId": "eu-central-1_XXXXXXXXX",
    "callerContext": {
        "awsSdkVersion": "aws-sdk-dotnet-coreclr-3.3.12.7",
        "clientId": "2490gqsa3gXXXXXXXXXXXXXXXX"
    },
    "request": {
        "challengeAnswer": "{\"DeviceSub\":\"TestSub\"}",
        "privateChallengeParameters": {
            "CUSTOM_CHALLENGE": "SessionService_SendDevice"
        },
        "userAttributes": {
            "sub": "8624237e-0be8-425e-a2cb-XXXXXXXXXXXX",
            "email_verified": "true",
            "cognito:user_status": "CONFIRMED",
            "email": "X.XXXXXXXX#XXXXXXXXXX.de"
        }
    },
    "response": {
        "answerCorrect": true
    },
    "userName": "8624237e-0be8-425e-a2cb-XXXXXXXXXXXX"
}
Earlier I ran into the problem that the "challengeAnswer" part is described as a dictionary in the documentation, but it is actually just a string containing the dictionary as JSON. Sadly, I cannot find any information anywhere on why the returned object isn't accepted by Cognito.
Apparently someone had the same problem as me, using JavaScript: GitHub link
Can anyone tell me what the response object should look like so that it is accepted by Cognito? Thank you.
Well, my mistake was not considering the custom authentication flow as a whole. I found different documentation, which, by the way, is the one you should definitely use:
Customizing your user pool authentication flow
I ran into two wrong parts in the documentation here (the trigger sub-pages) and one error on my part.
Wrong part 1:
The DefineAuthChallenge and CreateAuthChallenge inputs define the session as a list of challenge results. This is all fine, but the challenge result object has its challenge metadata key displayed wrongly as "ChallengeMetaData", when it should instead be "ChallengeMetadata", with a lower-case "d" in "data". This gave me the "Unrecognizable lambda output" error, because "ChallengeMetaData" wasn't what the backend was expecting; it was looking for "ChallengeMetadata", which wasn't present. The first time you enter the DefineAuthChallenge lambda this error doesn't show up, because the session doesn't contain any challenge answers yet. The moment you verify a challenge, though, the session gets filled, and then the upper-case "D" gives you trouble.
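In other words, the metadata key inside each session challenge-result entry has to be spelled

"ChallengeMetadata": "SessionService_SendDevice"

and not, as the trigger sub-pages show it,

"ChallengeMetaData": "SessionService_SendDevice"

(the value here is just an example taken from my flow).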
Wrong part 2:
As described in my question, the VerifyAuthChallenge input for the "challengeAnswer" is a string, not a Dictionary.
Both wrong parts are displayed correctly on the documentation page I linked above, so I would recommend using that instead of the other documentation.
Error on my side:
I didn't really check what happens after you verify a custom challenge via the VerifyAuthChallenge trigger. In the linked page, the image above the heading 'DefineAuthChallenge: The challenges (state machine) Lambda trigger' clearly shows that after verifying the response, the DefineAuthChallenge trigger is invoked again, which I hadn't considered.
I hope this saves someone the time it took me to figure it out :-)
Just wondering: what's the best practice for determining the permissions I should grant for deploying my CloudFormation template?
After spending some time trying to grant it the minimal permissions it requires, I find that's really time-consuming and error-prone. I've noticed that depending on the operation on my stack (brand new vs. some updates vs. delete), I need different permissions.
I'd guess it should be possible for some parser, given a CloudFormation template, to determine the minimum set of permissions it requires?
Maybe I can give ec2:* access to resources tagged Cost Center: My Project Name? Is this OK? But I wonder what happens when I change my project name, for example?
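In policy terms I'm imagining something like this sketch (the tag key is simplified to "CostCenter" here, and I gather not every EC2 action supports resource-tag conditions):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringEquals": { "aws:ResourceTag/CostCenter": "My Project Name" }
            }
        }
    ]
}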
Alternatively, is it OK to assume it's fine to grant, say, ec2:* based on the assumption that the CloudFormation parts are usually only executed from CodeCommit/GitHub/CodePipeline and aren't something that is likely to be public or easy to hack? Though this sounds like a flawed argument to me...
In the short term, you can use aws-leastprivilege. But it doesn't support every resource type.
For the long term: as mentioned in this 2019 re:Invent talk, CloudFormation is working towards open-sourcing and migrating most of its resource types to a new public resource schema. One of the benefits of this is that you'll be able to see the permissions required to perform each operation.
E.g., for AWS::ImageBuilder::Image, the schema says:
"handlers": {
"create": {
"permissions": [
"iam:GetRole",
"imagebuilder:GetImageRecipe",
"imagebuilder:GetInfrastructureConfiguration",
"imagebuilder:GetDistributionConfiguration",
"imagebuilder:GetImage",
"imagebuilder:CreateImage",
"imagebuilder:TagResource"
]
},
"read": {
"permissions": [
"imagebuilder:GetImage"
]
},
"delete": {
"permissions": [
"imagebuilder:GetImage",
"imagebuilder:DeleteImage",
"imagebuilder:UnTagResource"
]
},
"list": {
"permissions": [
"imagebuilder:ListImages"
]
}
}
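Once a resource type is published to the CloudFormation registry, you should be able to pull its schema yourself and read the handler permissions out of it, e.g. via the AWS CLI (a sketch; this assumes the type is available in your region):

$ aws cloudformation describe-type \
      --type RESOURCE \
      --type-name AWS::ImageBuilder::Image \
      --query Schema \
      --output text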
I can create a SQL DW using ARM with no problem. However, the portal supports an option of also installing a sample database, e.g. AdventureWorksDW. How can I do the equivalent using an ARM script?
BTW, I clicked on "automation options" in the portal and it shows an ARM script with an extension that is probably the piece that installs the sample database, but it asks for some parameters (e.g. storageKey, storageUri) that I don't know.
Here's what I think is the relevant portion of the ARM JSON:
"name": "PolybaseImport",
"type": "extensions",
"apiVersion": "2014-04-01-preview",
"dependsOn": [
"[concat('Microsoft.Sql/servers/', parameters('serverName'), '/databases/', parameters('databaseName'))]"
],
"properties": {
"storageKeyType": "[parameters('storageKeyType')]",
"storageKey": "[parameters('storageKey')]",
"storageUri": "[parameters('storageUri')]",
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"operationMode": "PolybaseImport"
}
More specifically, looking at the ARM deploy script generated from the portal, here are the key elements that I need to know in order to auto deploy using my own ARM script:
…
"storageKey": {
"value": null <- without knowing this, I can’t deploy.
},
"storageKeyType": {
"value": "SharedAccessKey"
},
"storageUri": {
"value": https://sqldwsamplesdefault.blob.core.windows.net/adventureworksdw/AdventureWorksDWPolybaseImport/Manifest.xml <- this is not a public blob, so can’t look at it
},
…
AFAIK that's currently not possible. The portal kicks off a workflow that provisions the new DW resources, generates the sample DW schema then loads data. The sample is stored in a non-public blob so you won't be able to access it.
I don't think it would be hard to make it available publicly, but it does take some work, so perhaps you should add a suggestion here: https://feedback.azure.com/forums/307516-sql-data-warehouse
I am trying to build a JSON RESTful web service in C/C++.
I have tried Axis2/C and Staff, which work great for XML serialization/deserialization, but not for JSON.
You might want to take a look at Casablanca, introduced in Herb Sutter's blog.
There are a small number of libraries that support creating REST services in C++, e.g. restinio:
#include <restinio/all.hpp>

int main()
{
    restinio::run(
        restinio::on_this_thread()
            .port(8080)
            .address("localhost")
            .request_handler([](auto req) {
                return req->create_response().set_body("Hello, World!").done();
            }));
    return 0;
}
Try https://github.com/babelouest/ulfius, a great library for building C/C++ RESTful APIs. It supports many platforms: Linux, FreeBSD, Windows and others.
Try ngrest. It's a simple but fast C++ RESTful JSON web services framework. It can be deployed on top of Apache2, Nginx or its own simple HTTP server.
Regarding Axis2/C with JSON: it seems that official Axis2/C is no longer maintained, so Axis2/C has become obsolete (but still works).
JSON support for Axis2/C is available in the axis2c-unofficial project.
There are installation manuals on how to install Axis2/C with JSON support under Linux, under Windows using a binary package, and under Windows from source code.
You can try it with WSF Staff using the Customers (REST) example in JSON mode (available in the staff/samples/rest/webclient directory of the Staff source code).
You could look at ffead-cpp. Apart from providing support for JSON and RESTful web services, it also includes more features. This framework may be too heavyweight for your situation, though.
For a C++ web service, I am using the following stack (see the sketch below):
ipkn/crow C++ micro web framework
nlohmann/json for json serialization/deserialization.
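A minimal sketch of how that stack fits together (the route and JSON shape here are arbitrary examples, not from either project's docs):

#include "crow.h"              // ipkn/crow micro web framework
#include <nlohmann/json.hpp>   // nlohmann/json

int main()
{
    crow::SimpleApp app;

    // GET /users/<name> -> {"name":"<name>"}
    CROW_ROUTE(app, "/users/<string>")
    ([](const std::string& name) {
        nlohmann::json body;
        body["name"] = name;
        return crow::response{body.dump()};
    });

    app.port(8000).run();
}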
Take a look at Oat++
It has:
URL routing with URL-parameter mapping
Support for Swagger-UI endpoint annotations.
Object-Mapping with JSON support.
Example endpoint:
ENDPOINT("GET", "users/{name}", getUserByName, PATH(String, name)) {
auto userDto = UserDto::createShared();
userDto->name = name;
return createDtoResponse(Status::CODE_200, userDto);
}
Curl:
$ curl http://localhost:8000/users/john
{"name":"john"}
You may want to take a look at webcc.
It's a lightweight C++ HTTP client and server library based on Boost.Asio (1.66+), designed for embedding.
It's quite promising and actively developed.
It includes a lot of examples to demonstrate how to create a server and client.
There is a JIRA issue that resolved the support of JSON in Axis2/C.
I implemented it in my project and managed to get the writer working (BadgerFish convention), but I am still trying to manage the reader. Managing the stack in memory seems more complicated.
JSON and JSONPath are supported for both C and C++ in gsoap with a new code generator and a new JSON API to get you started quickly.
Several JSON, JSON-RPC and REST examples are included. Memory management is automatic.
The code generator can be useful. Take for example the json.org menu.json snippet:
{ "menu": {
"id": "file",
"value": "File",
"popup": {
"menuitem": [
{"value": "New", "onclick": "CreateNewDoc()"},
{"value": "Open", "onclick": "OpenDoc()"},
{"value": "Close", "onclick": "CloseDoc()"}
]
}
}
}
The gsoap command jsoncpp -M menu.json generates this code to populate a JSON value:
value x(ctx);
x["menu"]["id"] = "file";
x["menu"]["value"] = "File";
x["menu"]["popup"]["menuitem"][0]["value"] = "New";
x["menu"]["popup"]["menuitem"][0]["onclick"] = "CreateNewDoc()";
x["menu"]["popup"]["menuitem"][1]["value"] = "Open";
x["menu"]["popup"]["menuitem"][1]["onclick"] = "OpenDoc()";
x["menu"]["popup"]["menuitem"][2]["value"] = "Close";
x["menu"]["popup"]["menuitem"][2]["onclick"] = "CloseDoc()";
Code for reading parsed JSON values and for JSONPath queries can also be generated by this tool.
EDIT
To clarify, the jsoncpp command-line code generator shows the API code to read and write JSON data, using a .json file as a template. I found this useful to save time writing the API code to populate and extract JSON data. JSONPath query code can also be generated with this tool.
For a web service in C, you can leverage a library like ulfius or civetweb:
https://github.com/babelouest/ulfius
https://github.com/civetweb/civetweb/blob/master/docs/Embedding.md
For a web service in C++, you can leverage a library like libhv or restbed:
https://github.com/ithewei/libhv
https://github.com/Corvusoft/restbed
I'm currently working on an alternative way to view the threads and messages, but I have problems figuring out how to display the images attached to a message.
I make a GET request to this URL: https://graph.facebook.com/t_id.T_ID/messages?access_token=ACCESS_TOKEN, and the response includes:
"attachments": {
"data": [
{
"id": "df732cf372bf07f29030b5d44313038c",
"mime_type": "image/jpeg",
"name": "image.jpg",
"size": 76321
}
]
}
but I can't find any way to access the image.
Thanks
Support for this hasn't yet been added to the Graph API, and as with many of the other messaging APIs, it's currently only available for testing (i.e. you must be a developer of the app to use it presently).
There's an undocumented REST API endpoint for this, which should work for any app (that you're the developer of, as above).
To use the REST method to get the attachment data, the endpoint is:
https://api.facebook.com/method/messaging.getattachment
With parameters:
access_token=YOUR_ACCESS_TOKEN
mid=MESSAGE_ID
aid=ATTACHMENT_ID
format=json //(it defaults to XML otherwise)
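Put together as a plain GET request, that would be something like (a sketch; substitute your own values):

$ curl "https://api.facebook.com/method/messaging.getattachment?access_token=YOUR_ACCESS_TOKEN&mid=MESSAGE_ID&aid=ATTACHMENT_ID&format=json"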
The response is like this:
{"content_type":"image\/png","filename":"Screen Shot 2012-02-08 at 11.35.35.png","file_size":42257,"data":<FILE CONTENTS>}
I've just tested this and it worked OK for me; taking the <FILE CONTENTS> and base64-decoding them gave me back the original image correctly.