Listing VMs under Managed Instance Group - google-cloud-platform

I want to list all VMs that are in a Managed Instance Group using the Google Cloud client libraries for .NET.
Application Type: Console App, .NET 7.0
Library: Google.Cloud.Compute.V1
RegionInstanceGroupManagersClient regionInstanceGroupManagersClient = await RegionInstanceGroupManagersClient.CreateAsync();
var vms = regionInstanceGroupManagersClient.ListManagedInstancesAsync("projectId", "region", "mig_name");
await foreach (var vm in vms)
{
    Console.WriteLine(vm.Instance);
}
Error:
Grpc.Core.RpcException: 'Status(StatusCode="InvalidArgument",
Detail="Invalid value for field 'pageToken': ''. Supplied restart
token corresponds to a zone not supported by this managed instance
group.")'
I'm trying to understand this issue, since in the documentation pageToken is not a required request field. According to the documentation, if pageToken is not provided, the first page will be retrieved.

You are using a region-based instance group to list the VMs in the managed instance group; for this you need to use InstanceGroupManager.Types.ListManagedInstancesResults.
In that type, use the constant public const string Pageless = "PAGELESS".
This makes the API ignore the pageToken query parameter and return the results in a single response (see the sketch after the quoted documentation below). Can you try this and post if you get any errors?
Pageless : (Default) Pagination is disabled for the group's
listManagedInstances API method. maxResults and pageToken query
parameters are ignored and all instances are returned in a single
response.
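Here is a minimal, untested sketch of that idea in C#. It assumes the generated .NET client exposes PatchAsync for regional MIGs and that ListManagedInstancesResults can be set as shown; the project, region and MIG names are the same placeholders as above:

using Google.Cloud.Compute.V1;

var client = await RegionInstanceGroupManagersClient.CreateAsync();

// Patch the MIG so listManagedInstances ignores pageToken and returns
// all instances in a single response.
var patch = new InstanceGroupManager
{
    // "PAGELESS" is the value behind the Pageless constant mentioned above.
    ListManagedInstancesResults = "PAGELESS"
};

var operation = await client.PatchAsync("projectId", "region", "mig_name", patch);
await operation.PollUntilCompletedAsync();  // patch is a long-running operation

// The original listing call should now succeed without a page token.
await foreach (var vm in client.ListManagedInstancesAsync("projectId", "region", "mig_name"))
{
    Console.WriteLine(vm.Instance);
}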


Sagemaker Data Capture does not write files

I want to enable data capture for a specific endpoint (so far, only via the console). The endpoint works fine and also logs & returns the desired results. However, no files are written to the specified S3 location.
Endpoint Configuration
The endpoint is based on a training job with a scikit-learn classifier. It has only one variant, which is an ml.m4.xlarge instance type. Data Capture is enabled with a sampling percentage of 100%. As data capture storage locations I tried s3://<bucket-name> as well as s3://<bucket-name>/<some-other-path>. For the "Capture content type" I tried leaving everything blank, setting text/csv in "CSV/Text", and application/json in "JSON".
Endpoint Invocation
The endpoint is invoked from a Lambda function using a SageMaker runtime client. Here's the call:
sagemaker_body_source = {
    "segments": segments,
    "language": language
}
payload = json.dumps(sagemaker_body_source).encode()
response = self.client.invoke_endpoint(EndpointName=endpoint_name,
                                       Body=payload,
                                       ContentType='application/json',
                                       Accept='application/json')
result = json.loads(response['Body'].read().decode())
return result["predictions"]
Internally, the endpoint uses a Flask API with an /invocation path that returns the result.
Logs
The endpoint itself works fine and the Flask API is logging input and output:
INFO:api:body: {'segments': [<strings...>], 'language': 'de'}
INFO:api:output: {'predictions': [{'text': 'some text', 'label': 'some_label'}, ....]}
Data capture can be enabled by using the SDK as shown below -
data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri=s3_capture_upload_path,
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
    endpoint_name=endpoint_name,
    data_capture_config=data_capture_config,
)
Make sure to reference your data capture config in your endpoint creation step. I've always seen this method work. Can you try this and let me know? Reference notebook
NOTE - I work for AWS SageMaker, but my opinions are my own.
So the issue seemed to be related to the IAM role. The default role (ModelEndpoint-Role) does not have permission to write files to S3. It worked via the SDK since that uses another role in SageMaker Studio. I did not receive any error message about this.
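For anyone hitting the same problem, a rough sketch of the kind of S3 write permission the endpoint's execution role was missing (the bucket name is a placeholder; your exact policy may differ):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}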

FHIR works on AWS server not allowing to keep customized id as primary key

We are working with FHIR (Fast Healthcare Interoperability Resources).
We have followed "FHIR Works on AWS" and deployed the CloudFormation template given by AWS in our AWS environment. The following is the template that we have deployed:
https://docs.aws.amazon.com/solutions/latest/fhir-works-on-aws/aws-cloudformation-template.html
Requirement: we want to maintain client-specific/customized ids as the primary key on the server.
Problem: the server is not allowing us to override or maintain client-specific (customized) ids as the primary key. In fact, at runtime it generates its own ids and ignores the id given by us.
The FHIR spec allows you to define your own IDs when using "update as create". This is when you create a new resource on the server, but use a PUT (update) request to the ID you want to create, such as Patient/1, instead of a POST (create) request to the resource URL. The server should return a 201 Created status instead of 200 OK. For more information see https://hl7.org/fhir/http.html#upsert
Not every FHIR server supports this, but if AWS does, this is likely how it would work. The field in the CapabilityStatement for this feature is CapabilityStatement.rest.resource.updateCreate. A request sketch is shown below.
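For illustration, an update-as-create request would look roughly like this (the base URL, resource id, and payload are placeholders, and the server must have updateCreate enabled):

curl -X PUT "https://<fhir-server-base-url>/Patient/client-id-123" \
  -H "Content-Type: application/fhir+json" \
  -d '{"resourceType": "Patient", "id": "client-id-123"}'
# A 201 Created response means the resource was created with the client-supplied id.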
EDIT:
This is possible by modifying the parameters passed to the DynamoDbDataService constructor in the deployment repo's src/config.ts
By default supportUpdateCreate, the second parameter, is set to false
const dynamoDbDataService = new DynamoDbDataService(DynamoDb, false, { enableMultiTenancy });
but you can set it to true to enable this functionality
const dynamoDbDataService = new DynamoDbDataService(DynamoDb, true, { enableMultiTenancy });

How to get the Google Cloud project number programmatically?

I want to use Google Secret Manager in my project. To access a saved secret it is necessary to provide a secret name which contains the Google project number. It would be convenient to get this number programmatically to form the secret name instead of saving it in an environment variable. I use the Node.js runtime for my project. I know there is a library, google-auth-library, which allows getting the project id. Is it possible to get the project number somehow?
You can access secrets by project_id or project_number. The following are both valid resource IDs that point to the same secret:
projects/my-project/secrets/my-secret
projects/1234567890/secrets/my-secret
You can get metadata, including project_id and project_number, from the metadata service. There are many default values. The ones you're looking for are numeric-project-id and project-id.
Here is an example using curl to access the metadata service. You would run this inside your workload, typically during initial boot:
curl "https://metadata.google.internal/computeMetadata/v1/project/project-id" \
--header "Metadata-Flavor: Google"
Note: the Metadata-Flavor: Google header is required.
To access these values from Node, you can construct your own HTTP client. Alternatively, you can use the googleapis/gcp-metadata package:
const gcpMetadata = require('gcp-metadata');

async function projectID() {
  const id = await gcpMetadata.project('project-id');
  return id;
}
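The same package can also return the project number directly; a minimal sketch using the numeric-project-id key mentioned above:

const gcpMetadata = require('gcp-metadata');

async function projectNumber() {
  // 'numeric-project-id' resolves to the project number rather than the project id.
  const number = await gcpMetadata.project('numeric-project-id');
  return number;
}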
You can send a GET request to the Resource Manager API
https://cloudresourcemanager.googleapis.com/v1/projects/PROJECT_ID?alt=json
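For example, a sketch assuming you have credentials with permission to call projects.get (the response JSON includes a projectNumber field):

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://cloudresourcemanager.googleapis.com/v1/projects/PROJECT_ID?alt=json"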
Not sure if the following method can be useful in your case, but I put it here, just in case:
gcloud projects list --filter="$PROJECT_ID" --format="value(PROJECT_NUMBER)"
It should return the project number based on the project identifier (in the PROJECT_ID variable), under the assumption that the user (or service account) who runs that command has the relevant permissions.
If you're doing this from outside a Cloud VM, so that the metadata service is not available, you can use the Resource Manager API to convert the project name to project number:
const {ProjectsClient} = require('@google-cloud/resource-manager').v3;
const resourcemanagerClient = new ProjectsClient();

const projectId = 'your-project-id-123'; // TODO: replace with your project ID
// v3 expects the resource name in the form "projects/PROJECT_ID".
const [response] = await resourcemanagerClient.getProject({name: `projects/${projectId}`});
// response.name comes back as "projects/PROJECT_NUMBER".
const projectNumber = response.name.split('/')[1];

How to get a metric sample from the Monitoring API

I took a very careful look at the Monitoring API. As far as I have read, it is possible to use gcloud to create Monitoring policies and to edit them (using the Alert API).
Nevertheless, on the one hand it seems gcloud is only able to create and edit policies, not to read the results from such policies. From this page I read these options:
Creating new policies
Deleting existing policies
Retrieving specific policies
Retrieving all policies
Modifying existing policies
On the other hand, I read about the result of a failed request:
Summary of the result of a failed request to write data to a time series.
So it rings a bell that I can indeed get a list of results, such as all failed write requests during some period. But how?
My straight question is: can I somehow either listen to alert events or get a list of alert results through Monitoring API v3?
I also see tag_firestore_instance, which is somehow related to Firestore, but how do I use it and what information can I search for? I can't find anywhere how to use it. Maybe via a plain GET (e.g. Postman/curl) or from Cloud Shell.
PS: This question was originally posted in a Google Group but I was encouraged to ask it here.
*** Edited after Alex's suggestion
I have an Angular page listening to a document from my Firestore database:
export class AppComponent {
  public transfers: Observable<any[]>;
  transferCollectionRef: AngularFirestoreCollection<any>;

  constructor(public auth: AngularFireAuth, public db: AngularFirestore) {
    this.listenSingleTransferWithToken();
  }

  async listenSingleTransferWithToken() {
    await this.auth.signInWithCustomToken("eyJ ... CVg");
    this.transferCollectionRef = this.db.collection<any>('transfer', ref => ref.where("id", "==", "1"));
    this.transfers = this.transferCollectionRef.snapshotChanges().map(actions => {
      return actions.map(action => {
        const data = action.payload.doc.data();
        const id = action.payload.doc.id;
        return { id, ...data };
      });
    });
  }
}
So, I understand there is at least one read count to be returned for:
name: projects/firetestjimis
filter: metric.type = "firestore.googleapis.com/document/read_count"
interval.endTime: 2020-05-07T15:09:17Z
It was a little difficult to follow what you were saying, but here's what I've figured out.
This is a list of available Firestore metrics: https://cloud.google.com/monitoring/api/metrics_gcp#gcp-firestore
You can then pass these metric types to this API
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list
On that page, I used the "Try This API" tool on the right side and filled in the following
name = projects/MY-PROJECT-ID
filter = metric.type = "firestore.googleapis.com/api/request_count"
interval.endTime = 2020-05-05T15:01:23.045123456Z
In Chrome's inspector, I can see that this is the GET request that the tool made:
https://content-monitoring.googleapis.com/v3/projects/MY-PROJECT-ID/timeSeries?filter=metric.type%20%3D%20%22firestore.googleapis.com%2Fapi%2Frequest_count%22&interval.endTime=2020-05-05T15%3A01%3A23.045123456Z&key=API-KEY-GOES-HERE
EDIT:
The above returned 200, but with an empty JSON payload.
We also needed to add the following entry to get data to populate:
interval.startTime = 2020-05-04T15:01:23.045123456Z
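If you prefer to query this from code rather than the API explorer, here is a rough sketch using the @google-cloud/monitoring Node.js client (the project id, metric, and time window are placeholders):

const monitoring = require('@google-cloud/monitoring');

const client = new monitoring.MetricServiceClient();

async function listReadCounts(projectId) {
  const now = Math.floor(Date.now() / 1000);
  const [timeSeries] = await client.listTimeSeries({
    name: client.projectPath(projectId),
    filter: 'metric.type = "firestore.googleapis.com/document/read_count"',
    // Both startTime and endTime are needed, otherwise the result comes back empty.
    interval: {
      startTime: { seconds: now - 24 * 60 * 60 },
      endTime: { seconds: now },
    },
  });
  for (const series of timeSeries) {
    console.log(series.metric.labels, series.points.length);
  }
}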
Also try going to console.cloud.google.com/monitoring/metrics-explorer and typing firestore in the "Find resource type and metric" box to see if Google's own dashboards have data populating. (This is to confirm that there is actually data there for you to fetch.)

Error while accessing AWS DAX from localhost client

Getting an error while accessing AWS DAX from a localhost client.
Error:
SEVERE: caught exception during cluster refresh: java.io.IOException: failed to configure cluster endpoints from hosts: [daxcluster*:8111]
java.io.IOException: failed to configure cluster endpoints from hosts:
Sample test code
public static String clientEndPoint = "*.amazonaws.com:8111";

DynamoDB getDynamoDBClient() {
    System.out.println("Creating a DynamoDB client");
    AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withRegion(Regions.US_EAST_1).build();
    return new DynamoDB(client);
}

static DynamoDB getDaxClient(String daxEndpoint) {
    ClientConfig daxConfig = new ClientConfig().withEndpoints(daxEndpoint);
    daxConfig.setRegion(Regions.US_EAST_1.getName());
    AmazonDaxClient client = new ClusterDaxClient(daxConfig);
    return new DynamoDB(client);
}

public static void main(String[] args) {
    DynamoDB client = getDaxClient(clientEndPoint);
    Table table = client.getTable("dev.Users");
    Item fa = table.getItem(new GetItemSpec().withPrimaryKey("userid", "tf#gmail.com"));
    System.out.println(fa);
}
A DAX cluster runs within your VPC. To connect from your laptop to the DAX cluster, you need to VPN into your VPC: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
Answer: DAX is only supported within a VPC.
I faced this same issue myself and found it out the hard way.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html
Usage Notes
For a list of AWS regions where DAX is available, refer to
https://aws.amazon.com/dynamodb/pricing.
DAX supports applications written in Java, Node.js, Python and .NET,
using AWS-provided clients for those programming languages.
DAX does not support Transport Layer Security (TLS).
DAX is only available for the EC2-VPC platform. (There is no support
for the EC2-Classic platform.)
DAX clusters maintain metadata about the attribute names of items they
store, and that metadata is maintained indefinitely (even after the
item has expired or been evicted from the cache). Applications that
use an unbounded number of attribute names can, over time, cause
memory exhaustion in the DAX cluster. This limitation applies only to
top-level attribute names, not nested attribute names. Examples of
problematic top-level attribute names include timestamps, UUIDs, and
session IDs.
Note that this limitation only applies to attribute names, not their
values. Items like this are not a problem: