Composer require branch name - google-cloud-platform

For example I want to require:
{
"repositories": [
{
"type": "git",
"url": "https://github.com/google/google-api-php-client.git"
}
],
"require": {
"google/apiclient": "v1-master"
}
}
In this example I try to require google/apiclient on the v1-master branch. I get this error:
[UnexpectedValueException]
Could not parse version constraint v1-master: Invalid version string "v1-master"

You need to prefix all dev (i.e. non-tagged) branch names with dev-.
To install the branch you need, use:
composer require google/apiclient:dev-v1-master
See composer docs.
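If the repository entry isn't in composer.json yet, the whole thing can also be done from the command line; a minimal sketch (the repository key google-api is just an illustrative name):
composer config repositories.google-api vcs https://github.com/google/google-api-php-client.git
composer require google/apiclient:dev-v1-master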

This will work:
{
"repositories": [
{
"type": "git",
"url": "https://github.com/google/google-api-php-client.git"
}
],
"require": {
"google/apiclient": "dev-BRANCH_NAME"
}
}
So the pattern is "dev-*": if your branch name is "bug-fix", require "dev-bug-fix".
Or with the command line:
composer require google/apiclient:dev-BRANCH_NAME

I was trying to do the same for a different Google repository, which contains several packages, and it took me some time to figure out, so I am sharing my solution below.
My goal is to pull the latest google/cloud-compute from https://github.com/googleapis/google-cloud-php.git on the master branch.
The following steps worked for me:
Clone the repository
git clone https://github.com/googleapis/google-cloud-php.git google-cloud-php
Set composer.json to use the right package from the local folder:
{
"repositories": [
{
"type": "path",
"url": "/Users/USERNAME/projects/google-cloud-php/Compute"
}
],
"require": {
"google/cloud-compute": "dev-master"
}
}
Please note that in step 2 the url points to the Compute subfolder, where the actual google/cloud-compute package lives.
My solution can easily be tweaked for any branch: just git checkout the appropriate branch in step 1 and change 'dev-master' to 'dev-YOUR_BRANCH' in step 2.
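For reference, the same setup can be driven from the command line instead of editing composer.json by hand; a rough sketch, assuming the local clone from step 1 (the repository key google-cloud-local is just a placeholder):
composer config repositories.google-cloud-local path /Users/USERNAME/projects/google-cloud-php/Compute
composer require google/cloud-compute:dev-master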

Related

Create a SharePoint list in a specific folder with Microsoft Graph API

Premise:
I have a folder under the SharePoint site, e.g.
https://... <mycompany-sharepoint-site.com>/
under which we have folders, so my URL is something like
https://... <mycompany-sharepoint-site.com>/Documents/Sub_folder_1/Sub_folder_2
pertaining to our project.
I need to be able to create a SharePoint list in the Sub_folder_2 folder and not at the root level.
With the SharePoint Graph API create-list URL
POST https://graph.microsoft.com/v1.0/sites/{site-id}/lists
I will ONLY be able to create it at the <mycompany-sharepoint-site.com> level (i.e. at the root level), which is not what I want.
FYI, I already tried (in Postman) to go the drives//items/<folder_id> route, or I should say attempted to, but failed.
Any help is greatly appreciated.
If you want to use the Graph API to create a folder in SharePoint, please use the following query:
POST /groups/{group-id}/drive/items/{parent-item-id}/children
For more information: https://learn.microsoft.com/en-us/graph/api/driveitem-post-children?view=graph-rest-1.0&tabs=http#http-request
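For reference, a minimal request body for that endpoint might look like the following (the folder name is only an example):
POST /groups/{group-id}/drive/items/{parent-item-id}/children
Content-Type: application/json

{
"name": "Sub_folder_2",
"folder": { },
"@microsoft.graph.conflictBehavior": "rename"
}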
Hope this is helpful.
I don't think that SharePoint supports creating a list inside a folder, but you can at least try to create a list and specify the path in the parent reference.
You need to find out the drive id.
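One way to find it (untested here, but this is a standard Graph call) is to list the site's drives and take the id of the document library you need; that id is the {drive_id} used below:
GET https://graph.microsoft.com/v1.0/sites/{site_id}/drives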
POST https://graph.microsoft.com/v1.0/sites/{site_id}/lists
{
"displayName": "Test",
"columns": [
{
"name": "Column1",
"text": {}
},
{
"name": "Column2",
"number": {}
}
],
"list": {
"template": "genericList"
},
"parentReference": {
"driveType": "documentLibrary",
"driveId": "{drive_id}",
"path": "/drives/{drive_id}/root:/Documents/Sub_folder_1/Sub_folder_2"
# or
# "path": "/drives/{drive_id}/root:/Sub_folder_1/Sub_folder_2"
}
}
I don't think it's even possible at all. Did you manage to do it manually?

awscli DescribeUserPoolClient returns almost nothing, although the documentation says it should return all app client settings

I'm trying to run DescribeUserPoolClient through Python code and also through CloudShell, and the command returns almost nothing:
{
"UserPoolClient": {
"UserPoolId": "id",
"ClientName": "name",
"ClientId": "id",
"ClientSecret": "secret",
"LastModifiedDate": "2021-05-10T14:21:24.733000+00:00",
"CreationDate": "2021-05-10T14:21:24.733000+00:00",
"RefreshTokenValidity": 30,
"TokenValidityUnits": {},
"AllowedOAuthFlows": [
"client_credentials"
],
"AllowedOAuthScopes": [
":write"
],
"AllowedOAuthFlowsUserPoolClient": true
}
}
These are the only parameters it returns, but the documentation says there should be a lot more, like "ExplicitAuthFlows" and others. Is this something with AWS, or maybe something with my access rights?
For anyone having trouble with the same issue: if a property is still at its default value and you have never touched (edited) it, Amazon won't return it in the response. This applies to many other AWS CLI commands as well.
Maybe it is common knowledge, but I struggled with it.
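A quick way to see this for yourself: explicitly set a property such as the explicit auth flows on the app client and re-run the describe call; once a property has been edited it shows up in the output. A hedged sketch with placeholder ids (note that update-user-pool-client overwrites app client settings you don't pass, so be careful on real clients):
aws cognito-idp update-user-pool-client \
  --user-pool-id <pool-id> \
  --client-id <client-id> \
  --explicit-auth-flows ALLOW_REFRESH_TOKEN_AUTH ALLOW_USER_SRP_AUTH

aws cognito-idp describe-user-pool-client \
  --user-pool-id <pool-id> \
  --client-id <client-id>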

CDK adds random parameters

So I have this function I'm trying to declare, and it works and deploys just dandy unless you uncomment the logRetention setting. If logRetention is specified, the cdk deploy operation adds additional parameters to the stack. And, of course, this behavior is completely unexplained in the documentation.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html#log-group
SingletonFunction.Builder.create(this, "native-lambda-s3-fun")
.functionName(funcName)
.description("")
// .logRetention(RetentionDays.ONE_DAY)
.handler("app")
.timeout(Duration.seconds(300))
.runtime(Runtime.GO_1_X)
.uuid(UUID.randomUUID().toString())
.environment(new HashMap<String, String>(){{
put("FILE_KEY", "/file/key");
put("S3_BUCKET", junk.getBucketName());
}})
.code(Code.fromBucket(uploads, functionUploadKey(
"formation-examples",
"native-lambda-s3",
lambdaVersion.getValueAsString()
)))
.build();
"Parameters": {
"lambdaVersion": {
"Type": "String"
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3BucketB030C8A8": {
"Type": "String",
"Description": "S3 bucket for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3VersionKey6A2AABD7": {
"Type": "String",
"Description": "S3 key for asset version \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aArtifactHashEDC522F0": {
"Type": "String",
"Description": "Artifact hash for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
}
},
It's a bug. They're Working On It™. So, rejoice - we can probably expect a fix sometime within the next decade.
I haven't tried it yet, but I'm guessing the workaround is to manipulate the low-level CfnLogGroup construct, since it has the authoritative retentionInDays property. The relevant high-level Log Group construct can probably be obtained from the Function via its logGroup property. Failing that, the LogGroup can be created from scratch (which will probably be a headache all on its own).
I also encountered the problem described above. From what I can tell, we are unable to specify a log group name and thus the log group name is predictable.
My solution was to simply create a LogGroup with the same name as my Lambda function with the /aws/lambda/ prefix.
Example:
var function = new Function(
this,
"Thing",
new FunctionProps
{
FunctionName = $"{Stack.Of(this).StackName}-Thing",
// ...
});
_ = new LogGroup(
this,
"ThingLogGroup",
new LogGroupProps
{
LogGroupName = $"/aws/lambda/{function.FunctionName}",
Retention = RetentionDays.ONE_MONTH,
});
This does not create unnecessary "AssetParameters..." CF template parameters like the inline option does.
Note: I'm using CDK version 1.111.0 and 1.86.0 with C#
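If you are working in Java like the snippet in the question, the same workaround looks roughly like this (a sketch, not tested; funcName is the variable from the question and the construct id is arbitrary):
import software.amazon.awscdk.services.logs.LogGroup;
import software.amazon.awscdk.services.logs.RetentionDays;

// Explicit log group with the name Lambda will use, created instead of setting logRetention
LogGroup.Builder.create(this, "native-lambda-s3-fun-logs")
.logGroupName("/aws/lambda/" + funcName)
.retention(RetentionDays.ONE_DAY)
.build();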

How to apply lifecycle patterns in AWS Elasticsearch to many indexes

I am trying to do this in AWS Elasticsearch: I create a template for the pattern application-logs-*, and then I want to apply an index policy, log-rotation-policy, to all indexes which match that expression. I have created my policy successfully, but when I try to create a template like so:
PUT _template/application-logs
{
"index_patterns" : [
"application-logs-*"
],
"settings" : {
"index.lifecycle.name": "log-rotation-policy",
}
}
I get an error:
"type": "illegal_argument_exception",
"reason": "unknown setting [index.policy_id] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
The AWS documentation is extremely vague.
OK, sorry, I thought I would post this answer anyway because, as I was writing it, I figured out the problem. The correct key to use is opendistro.index_state_management.policy_id, so it should be:
PUT _template/application-logs
{
"index_patterns" : [
"application-logs-*"
],
"settings" : {
"opendistro.index_state_management.policy_id": "log-rotation-policy",
}
}
I found the answer here.
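The template only takes effect for indexes created after it exists; for indexes that already exist you can, if I recall the Open Distro ISM API correctly, attach the policy explicitly:
POST _opendistro/_ism/add/application-logs-*
{
"policy_id": "log-rotation-policy"
}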

Defined template from Logstash not being used by Elasticsearch for mapping

I have the following Logstash output config to go into Elasticsearch from a Postgres database:
https://pastebin.com/BFCH3tuZ
I have defined the template location and the template itself as follows:
https://pastebin.com/mK5qshKM
When I run logstash I see the output as follows:
[2017-05-24T20:54:10,828][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-05-24T20:54:10,982][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0xff97ab URL:http://localhost:9200/>}
[2017-05-24T20:54:10,985][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/etc/logstash/universe_template.json"}
[2017-05-24T20:54:11,045][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"universe_elastic", "settings"=>{"analysis"=>{"filter"=>{"gr$
[2017-05-24T20:54:11,052][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/universe_elastic
[2017-05-24T20:54:11,145][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0xe60519 URL://localhost:9200$
[2017-05-24T20:54:11,154][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inf$
[2017-05-24T20:54:11,988][INFO ][logstash.pipeline ] Pipeline main started
[2017-05-24T20:54:12,079][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-05-24T20:54:12,108][INFO ][logstash.inputs.jdbc ] (0.101000s) select planet.id, planet.x || ':' || planet.y || ':' || planet.z coords, planet.x, planet.y, planet.z ,planetname,ru$
[2017-05-24T20:54:15,006][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
When I query the Elasticsearch templates I can see my template listed at http://xxxx:9200/_template/:
{ "universe_elastic": {
"order": 0,
"template": "universe_elastic",
"settings": {
"index": {
"analysis": {
"filter": {
"gramFilter": {
"token_chars": [
"letter",
"digit",
"punctuation",
"symbol"
], ETC ETC ETC......
However, when I run a check on my "universe" index, the mappings haven't come through:
https://pastebin.com/hw9hYfLn
I would expect to see the _all field and the include_in_all references set to true/false, but there's nothing. Also, the queries do not use the analyzers I have specified.
Any ideas what might be going wrong here? I have deleted all the other possible templates that were created and re-created the indexes, etc.
You've done almost everything correctly; you just need to change a single thing:
In your template, this line
"template": "universe_elastic",
should read
"template": "universe",
ES is only going to apply the template if your index name matches the template's index pattern (the template field).
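Once the template field is changed to universe (or a pattern like universe*), remember that templates are only applied at index creation time, so the existing index has to be dropped and re-created. A hedged check, using the localhost endpoint from the logs above:
curl -XDELETE 'http://localhost:9200/universe'
# re-run the Logstash pipeline so the index is re-created, then verify the mapping:
curl -XGET 'http://localhost:9200/universe/_mapping?pretty'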