Instance created via Service Account unable to use Google Cloud Speech API - authentication error - google-cloud-platform

I followed Google's Quick-Start documentation for the Speech API to enable billing and the API for an account. This account has authorized a service account to create Compute Engine instances on its behalf. After creating an instance on the child account and hosting a binary there that uses the Speech API, I am unable to successfully run the example C# code provided by Google:
try
{
    var speech = SpeechClient.Create();
    var response = speech.Recognize(new RecognitionConfig()
    {
        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
        LanguageCode = "en"
    }, RecognitionAudio.FromFile(audioFiles[0]));
    foreach (var result in response.Results)
    {
        foreach (var alternative in result.Alternatives)
        {
            Debug.WriteLine(alternative.Transcript);
        }
    }
}
catch (Exception ex)
{
    // ...
}
Requests fail on the SpeechClient.Create() line with the following error:
Grpc.Core.RpcException: Status(StatusCode=Unauthenticated, Detail="Exception occured in metadata credentials plugin.")
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Grpc.Core.Internal.AsyncCall`2.UnaryCall(TRequest msg)
   at Grpc.Core.Calls.BlockingUnaryCall[TRequest,TResponse](CallInvocationDetails`2 call, TRequest req)
   at Grpc.Core.DefaultCallInvoker.BlockingUnaryCall[TRequest,TResponse](Method`2 method, String host, CallOptions options, TRequest request)
   at Grpc.Core.Internal.InterceptingCallInvoker.BlockingUnaryCall[TRequest,TResponse](Method`2 method, String host, CallOptions options, TRequest request)
   at Google.Cloud.Speech.V1.Speech.SpeechClient.Recognize(RecognizeRequest request, CallOptions options)
   at Google.Api.Gax.Grpc.ApiCall.<>c__DisplayClass0_0`2.b__1(TRequest req, CallSettings cs)
   at Google.Api.Gax.Grpc.ApiCallRetryExtensions.<>c__DisplayClass1_0`2.b__0(TRequest request, CallSettings callSettings)
   at Google.Api.Gax.Grpc.ApiCall`2.Sync(TRequest request, CallSettings perCallCallSettings)
   at Google.Cloud.Speech.V1.SpeechClientImpl.Recognize(RecognizeRequest request, CallSettings callSettings)
   at Google.Cloud.Speech.V1.SpeechClient.Recognize(RecognitionConfig config, RecognitionAudio audio, CallSettings callSettings)
   at Rc2Solver.frmMain.RecognizeWordsGoogleSpeechApi() in C:\Users\jorda\Google Drive\VSProjects\Rc2Solver\Rc2Solver\frmMain.cs:line 1770
I have verified that the Speech API is activated. Here are the scopes that the service account uses when creating the Compute instances:
credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer(me)
    {
        Scopes = new[] { ComputeService.Scope.Compute, ComputeService.Scope.CloudPlatform }
    }.FromPrivateKey(yk)
);
I have found no information or code online about specifically authorizing or authenticating the Speech API for service account actors. Any help is appreciated.

It turns out the issue was that the Compute Engine instances needed to be created with a ServiceAccounts parameter specified. Otherwise the instances carry no default service account credential, which is what the SpeechClient.Create() call relies on. Here is the proper way to create an instance attached to a service account; it uses the default service account tied to the project ID:
service = new ComputeService(new BaseClientService.Initializer()
{
    HttpClientInitializer = credential,
    ApplicationName = "YourAppName"
});

string MyProjectId = "example-project-27172";
var project = await service.Projects.Get(MyProjectId).ExecuteAsync();

ServiceAccount servAcct = new ServiceAccount()
{
    Email = project.DefaultServiceAccount,
    Scopes = new[]
    {
        "https://www.googleapis.com/auth/cloud-platform"
    }
};

Instance instance = new Instance()
{
    MachineType = service.BaseUri + MyProjectId + "/zones/" + targetZone + "/machineTypes/" + "g1-small",
    Name = name,
    Description = name,
    Disks = attachedDisks,
    NetworkInterfaces = networkInterfaces,
    ServiceAccounts = new[] { servAcct },
    Metadata = md
};

batchRequest.Queue<Instance>(service.Instances.Insert(instance, MyProjectId, targetZone),
    (content, error, i, message) =>
    {
        if (error != null)
        {
            AddEventMsg("Error creating instance " + name + ": " + error.ToString());
        }
        else
        {
            AddEventMsg("Instance " + name + " created");
        }
    });
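As a side note for anyone debugging a similar setup: from inside the VM you can confirm which service account and scopes the instance actually carries by querying the Compute Engine metadata server. Below is a minimal sketch; the metadata endpoint and required header are standard on Compute Engine, but the small console program around them is purely illustrative.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class MetadataCheck
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Every request to the metadata server must carry this header.
            http.DefaultRequestHeaders.Add("Metadata-Flavor", "Google");

            // Returns the email and scopes of the service account attached to this instance.
            string info = await http.GetStringAsync(
                "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true");
            Console.WriteLine(info);
        }
    }
}

If no service account (or only a narrow set of scopes) shows up here, SpeechClient.Create() has nothing usable to build its default credentials from, which matches the Unauthenticated error above.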

Related

AWS Redshift serverless - how to get the cluster id value

I'm following the AWS documentation about how to connect to Redshift by [generating user credentials][1].
But the get-cluster-credentials API requires a cluster id parameter, which I don't have for a serverless endpoint. What id should I use?
EDIT:
[![Serverless endpoint dashboard][2]][2]
This is the screen of a serverless endpoint dashboard. There is no cluster ID.
[1]: https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html
[2]: https://i.stack.imgur.com/VzvIs.png
Look at this newer guide, which covers connecting to Amazon Redshift Serverless: https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-connecting.html
It contains the information that answers your question:
Connecting to the serverless endpoint with the Data API
You can also use the Amazon Redshift Data API to connect to the serverless endpoint. Leave off the cluster-identifier parameter in your AWS CLI calls to route your query to the serverless endpoint.
UPDATE
I wanted to test this to make sure that a successful connection can be made. I followed this doc to set up a Serverless instance:
Get started with Amazon Redshift Serverless
I loaded the sample data.
Now I attempted to connect to it using software.amazon.awssdk.services.redshiftdata.RedshiftDataClient.
The Java V2 code:
try {
    ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
            .database(database)
            .sql(sqlStatement)
            .build();

    ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
    return response.id();

} catch (RedshiftDataException e) {
    System.err.println(e.getMessage());
    System.exit(1);
}
return "";
Notice there is no cluster id or user. Only a database name (sample_data_dev). The call worked perfectly.
Here is the full code example that successfully queries data from a serverless instance using the AWS SDK for Java V2.
package com.example.redshiftdata;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.redshiftdata.model.*;
import software.amazon.awssdk.services.redshiftdata.RedshiftDataClient;
import software.amazon.awssdk.services.redshiftdata.model.DescribeStatementRequest;
import java.util.List;

/**
 * To run this Java V2 code example, ensure that you have setup your development environment, including your credentials.
 *
 * For information, see this documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class RetrieveDataServerless {

    public static void main(String[] args) {

        final String USAGE = "\n" +
                "Usage:\n" +
                "    RetrieveData <database> <sqlStatement> \n\n" +
                "Where:\n" +
                "    database - the name of the database (for example, sample_data_dev). \n" +
                "    sqlStatement - the sql statement to use. \n";

        String database = "sample_data_dev";
        String sqlStatement = "Select * from tickit.sales";

        Region region = Region.US_WEST_2;
        RedshiftDataClient redshiftDataClient = RedshiftDataClient.builder()
                .region(region)
                .build();

        String id = performSQLStatement(redshiftDataClient, database, sqlStatement);
        System.out.println("The identifier of the statement is " + id);
        checkStatement(redshiftDataClient, id);
        getResults(redshiftDataClient, id);
        redshiftDataClient.close();
    }

    public static void checkStatement(RedshiftDataClient redshiftDataClient, String sqlId) {
        try {
            DescribeStatementRequest statementRequest = DescribeStatementRequest.builder()
                    .id(sqlId)
                    .build();

            // Wait until the sql statement processing is finished.
            boolean finished = false;
            String status = "";
            while (!finished) {
                DescribeStatementResponse response = redshiftDataClient.describeStatement(statementRequest);
                status = response.statusAsString();
                System.out.println("..." + status);

                if (status.compareTo("FINISHED") == 0) {
                    break;
                }
                Thread.sleep(1000);
            }
            System.out.println("The statement is finished!");

        } catch (RedshiftDataException | InterruptedException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    public static String performSQLStatement(RedshiftDataClient redshiftDataClient,
                                             String database,
                                             String sqlStatement) {
        try {
            ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
                    .database(database)
                    .sql(sqlStatement)
                    .build();

            ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
            return response.id();

        } catch (RedshiftDataException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
        return "";
    }

    public static void getResults(RedshiftDataClient redshiftDataClient, String statementId) {
        try {
            GetStatementResultRequest resultRequest = GetStatementResultRequest.builder()
                    .id(statementId)
                    .build();

            GetStatementResultResponse response = redshiftDataClient.getStatementResult(resultRequest);

            // Iterate through the List element where each element is a List object.
            List<List<Field>> dataList = response.records();

            // Print out the records.
            for (List list : dataList) {
                for (Object myField : list) {
                    Field field = (Field) myField;
                    String value = field.stringValue();
                    if (value != null)
                        System.out.println("The value of the field is " + value);
                }
            }

        } catch (RedshiftDataException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}

Cloud Functions / Cloud Tasks UNAUTHENTICATED error

I am trying to get a Cloud Function to create a Cloud Task that will invoke a Cloud Function. Easy.
The flow and use case are very close to the official tutorial here.
I also looked at this article by Doug Stevenson and in particular its security section.
No luck, I am consistently getting a 16 (UNAUTHENTICATED) error in Cloud Task.
If I can trust what I see in the console, it seems that Cloud Tasks is not attaching the OIDC token to the request (screenshot omitted).
Yet, in my code I do have the oidcToken object:
const { v2beta3, protos } = require("@google-cloud/tasks");

import {
    PROJECT_ID,
    EMAIL_QUEUE,
    LOCATION,
    EMAIL_SERVICE_ACCOUNT,
    EMAIL_HANDLER,
} from "./../config/cloudFunctions";

export const createHttpTaskWithToken = async function (
    payload: {
        to_email: string;
        templateId: string;
        uid: string;
        dynamicData?: Record<string, any>;
    },
    {
        project = PROJECT_ID,
        queue = EMAIL_QUEUE,
        location = LOCATION,
        url = EMAIL_HANDLER,
        email = EMAIL_SERVICE_ACCOUNT,
    } = {}
) {
    const client = new v2beta3.CloudTasksClient();
    const parent = client.queuePath(project, location, queue);

    // Convert message to buffer.
    const convertedPayload = JSON.stringify(payload);
    const body = Buffer.from(convertedPayload).toString("base64");

    const task = {
        httpRequest: {
            httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
            url,
            oidcToken: {
                serviceAccountEmail: email,
                audience: new URL(url).origin,
            },
            headers: {
                "Content-Type": "application/json",
            },
            body,
        },
    };

    try {
        // Send create task request.
        const request = { parent: parent, task: task };
        const [response] = await client.createTask(request);
        console.log(`Created task ${response.name}`);

        return response.name;
    } catch (error) {
        if (error instanceof Error) console.error(Error(error.message));
        return;
    }
};
When logging the task object from the code above in Cloud Logging I can see that the service account is the one that I created for the purpose of this and that the Cloud Tasks are successfully created.
The IAM configuration and the Cloud Function that the task needs to invoke both look correct in the console (screenshots omitted).
Everything seems to be there, in theory.
Any advice as to what I would be missing?
Thanks,
Your audience is incorrect. It must end with the function name; here you only have the region and the project (https://<region>-<projectID>.cloudfunctions.net/). Use the full Cloud Functions URL as the audience, i.e. pass url itself rather than new URL(url).origin.

Problems with AWS SDK .NET

I am trying to retrieve images from my bucket to send to my mobile apps. Currently the devices access AWS directly, but I am adding a layer of security: my apps (iOS and Android) now make requests to my server, which then responds with DynamoDB and S3 data.
I am following the documentation and code samples provided by AWS for .NET. They worked seamlessly for DynamoDB, but I am running into problems with S3.
S3 .NET Documentation
My problem is that if I provide no credentials, I get the error:
Failed to retrieve credentials from EC2 Instance Metadata Service
This is expected as I have IAM roles set up and only want my apps and this server (in the future, only this server) to have access to the buckets.
But when I provide the credentials, the same way I provided credentials for DynamoDB, my server waits forever and doesn't receive any responses from AWS.
Here is my C#:
<%@ WebHandler Language="C#" Class="CheckaraRequestHandler" %>

using System;
using System.Web;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System.IO;
using System.Threading.Tasks;

public class CheckaraRequestHandler : IHttpHandler
{
    private const string bucketName = "MY_BUCKET_NAME";
    private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USEast1;
    public static IAmazonS3 client = new AmazonS3Client("MY_ACCESS_KEY", "MY_SECRET_KEY", RegionEndpoint.USEast1);

    public void ProcessRequest(HttpContext context)
    {
        if (context.Request.HttpMethod.ToString() == "GET")
        {
            string userID = context.Request.QueryString["User"];
            string Action = context.Request.QueryString["Action"];
            if (userID == null)
            {
                context.Response.ContentType = "text/plain";
                context.Response.Write("TRY AGAIN!");
                return;
            }
            if (Action == "GetPhoto")
            {
                ReadObjectDataAsync(userID).Wait();
            }
            var client = new AmazonDynamoDBClient("MY_ACCESS_KEY", "MY_SECRET_KEY", RegionEndpoint.USEast1);
            Console.WriteLine("Getting list of tables");
            var table = Table.LoadTable(client, "TABLE_NAME");
            var item = table.GetItem(userID);
            if (item != null)
            {
                context.Response.ContentType = "application/json";
                context.Response.Write(item.ToJson());
            }
            else
            {
                context.Response.ContentType = "text/plain";
                context.Response.Write("0");
            }
        }
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }

    static async Task ReadObjectDataAsync(string userID)
    {
        string responseBody = "";
        try
        {
            string formattedKey = userID + "/" + userID + "_PROFILEPHOTO.jpeg";
            //string formattedKey = userID + "_PROFILEPHOTO.jpeg";
            //formattedKey = formattedKey.Replace(":", "%3A");
            GetObjectRequest request = new GetObjectRequest
            {
                BucketName = bucketName,
                Key = formattedKey
            };
            using (GetObjectResponse response = await client.GetObjectAsync(request))
            using (Stream responseStream = response.ResponseStream)
            using (StreamReader reader = new StreamReader(responseStream))
            {
                string title = response.Metadata["x-amz-meta-title"]; // Assume you have "title" as metadata added to the object.
                string contentType = response.Headers["Content-Type"];
                Console.WriteLine("Object metadata, Title: {0}", title);
                Console.WriteLine("Content type: {0}", contentType);
                responseBody = reader.ReadToEnd(); // Now you process the response body.
            }
        }
        catch (AmazonS3Exception e)
        {
            Console.WriteLine("Error encountered ***. Message:'{0}' when writing an object", e.Message);
        }
        catch (Exception e)
        {
            Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
        }
    }
}
When I debug, this line waits forever:
using (GetObjectResponse response = await client.GetObjectAsync(request))
This is the same line that throws the credentials error when I don't provide them. Is there something that I am missing here?
Any help would be greatly appreciated.
I suspect that the AWS .NET SDK has some issues with it, specifically with the async call to S3.
The async call to DynamoDB works perfectly, but the S3 one hangs forever.
What fixed my problem was simply removing the async functionality (even though in the AWS docs, the async call is supposed to be used).
Before:
using (GetObjectResponse response = await client.GetObjectAsync(request))
After:
using (GetObjectResponse response = myClient.GetObject(request))
Hopefully this helps anyone else encountering this issue.
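A plausible underlying cause, offered here as an assumption rather than something the post above confirms, is the classic ASP.NET sync-over-async deadlock: ProcessRequest calls ReadObjectDataAsync(userID).Wait(), which blocks the request's synchronization context, and the await inside the method then never gets to resume on that context. If you prefer to keep the async S3 call, a minimal sketch of that alternative is to replace the using block inside ReadObjectDataAsync as follows:

// Sketch (assumption: the hang comes from blocking on the ASP.NET synchronization context).
// ConfigureAwait(false) lets the continuations resume on thread-pool threads instead of the
// blocked request context, so the outer .Wait() can complete.
using (GetObjectResponse response = await client.GetObjectAsync(request).ConfigureAwait(false))
using (Stream responseStream = response.ResponseStream)
using (StreamReader reader = new StreamReader(responseStream))
{
    responseBody = await reader.ReadToEndAsync().ConfigureAwait(false);
}

Another option is to make the handler itself asynchronous (for example by deriving from HttpTaskAsyncHandler and awaiting the call) instead of blocking with .Wait().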

Manual authentication for Google API in jclouds, separating token acquisition

I need to separate the authentication phase from the creation of Google's API object, but it is proving very difficult (for me) to make this possible.
This is very important because I am creating a REST API that should receive previously acquired authorization tokens from its users rather than their credentials directly, for security reasons: with tokens I can set a lifetime limit as specified in RFC 6750.
I have the following code:
public class Main {

    public static void main(String[] args) {
        // Reads the JSON credential file provided by Google
        String jsonContent = readJson(args[1]);

        // Pass the credential content
        GoogleComputeEngineApi googleApi = createApi(jsonContent);
    }

    public static GoogleComputeEngineApi createApi(final String jsonCredentialContent) {
        try {
            Supplier<Credentials> credentialSupplier = new GoogleCredentialsFromJson(jsonCredentialContent);

            ComputeServiceContext context = ContextBuilder
                    .newBuilder("google-compute-engine")
                    .credentialsSupplier(credentialSupplier)
                    .buildView(ComputeServiceContext.class);

            Credentials credentials = credentialSupplier.get();
            ContextBuilder contextBuilder = ContextBuilder
                    .newBuilder(GoogleComputeEngineProviderMetadata.builder().build())
                    .credentials(credentials.identity, credentials.credential);

            Injector injector = contextBuilder.buildInjector();
            return injector.getInstance(GoogleComputeEngineApi.class);
        } catch (Exception e) {
            System.out.println(e.getMessage());
            e.printStackTrace();
            return null;
        }
    }
}
Below is pseudo-code showing what I need:
public class Main {

    public static void main(String[] args) {
        String jsonCredentialContent = readJson(args[1]);
        String oauthToken = "";

        // First acquires the OAuth token
        if (getAuthenticationType("google-compute-engine").equals("oauth")) {
            oauthToken = getTokenForOAuth(jsonCredentialContent);
        }

        // Creates the Api with the previously acquired token
        GoogleComputeEngineApi googleApi = createApi(oauthToken);
    }

    [...]
}
You can directly use the jclouds OAuth API to get the bearer token, as follows:
GoogleCredentialsFromJson credentials = new GoogleCredentialsFromJson(jsoncreds);
AuthorizationApi oauth = ContextBuilder.newBuilder("google-compute-engine")
        .credentialsSupplier(credentials)
        .buildApi(AuthorizationApi.class);

try {
    long nowInSeconds = System.currentTimeMillis() / 1000;
    Claims claims = Claims.create(
            credentials.get().identity,                   // issuer
            "https://www.googleapis.com/auth/compute",    // write scope
            "https://accounts.google.com/o/oauth2/token", // audience
            nowInSeconds + 60,                            // token expiration (seconds)
            nowInSeconds                                  // current time (seconds)
    );
    Token token = oauth.authorize(claims);
    System.out.println(token);
} finally {
    oauth.close();
}
Once you have the Bearer access token you can create the jclouds context with it as follows:
// Override GCE default OAuth flow (JWT) by the Bearer token flow
Properties overrides = new Properties();
overrides.put(OAuthProperties.CREDENTIAL_TYPE, CredentialType.BEARER_TOKEN_CREDENTIALS.toString());

// It is important to set the proper identity too, as it is used to resolve the GCE project
ComputeServiceContext ctx = ContextBuilder.newBuilder("google-compute-engine")
        .overrides(overrides)
        .credentials(credentials.get().identity, token.accessToken())
        .buildView(ComputeServiceContext.class);

GoogleComputeEngineApi google = ctx.unwrapApi(GoogleComputeEngineApi.class);

Simple DB policy being ignored?

I'm trying to use AWS IAM to generate temporary tokens for a mobile app. I'm using the AWS C# SDK.
Here's my code...
The token generating service
public string GetIAMKey(string deviceId)
{
    //fetch IAM key...
    var credentials = new BasicAWSCredentials("MyKey", "MyAccessId");
    var sts = new AmazonSecurityTokenServiceClient(credentials);

    var tokenRequest = new GetFederationTokenRequest();
    tokenRequest.Name = deviceId;
    tokenRequest.Policy = File.ReadAllText(HostingEnvironment.MapPath("~/policy.txt"));
    tokenRequest.DurationSeconds = 129600;

    var tokenResult = sts.GetFederationToken(tokenRequest);

    var details = new IAMDetails
    {
        SessionToken = tokenResult.GetFederationTokenResult.Credentials.SessionToken,
        AccessKeyId = tokenResult.GetFederationTokenResult.Credentials.AccessKeyId,
        SecretAccessKey = tokenResult.GetFederationTokenResult.Credentials.SecretAccessKey,
    };

    return JsonConvert.SerializeObject(details);
}
The client
var iamkey = Storage.LoadPersistent<IAMDetails>("iamkey");
var simpleDBClient = new AmazonSimpleDBClient(iamkey.AccessKeyId, iamkey.SecretAccessKey, iamkey.SessionToken);
try
{
    var details = await simpleDBClient.SelectAsync(new SelectRequest { SelectExpression = "select * from mydomain" });
    return null;
}
catch (Exception ex)
{
    Storage.ClearPersistent("iamkey");
}
The policy file contents
{ "Statement":[{ "Effect":"Allow", "Action":"sdb:* ", "Resource":"arn:aws:sdb:eu-west-1:* :domain/mydomain*" } ]}
I keep getting the following error...
User (arn:aws:sts::myaccountid:federated-user/654321) does not have permission to perform (sdb:Select) on resource (arn:aws:sdb:us-east-1:myaccountid:domain/mydomain)
Notice that my policy file clearly specifies two things:
the region should be eu-west-1
the allowed action is a wildcard, i.e., allow everything
But the exception thrown claims that my user doesn't have permission in us-east-1.
Any ideas as to why I'm getting this error?
OK, figured it out.
You have to set the region endpoint when constructing the client that calls the service; otherwise the request is sent to us-east-1, which is why the error reports a us-east-1 resource while the policy only allows eu-west-1.
So:
var simpleDBClient = new AmazonSimpleDBClient(iamkey.AccessKeyId, iamkey.SecretAccessKey, iamkey.SessionToken, Amazon.RegionEndpoint.EUWest1);
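As an aside (not part of the original answer), the AWS SDK for .NET also lets you set a process-wide default region so that clients created without an explicit RegionEndpoint pick it up. A minimal sketch, assuming the SDK's global AWSConfigs settings are available in your SDK version:

// Assumption: AWS SDK for .NET global configuration is in scope (Amazon.AWSConfigs).
// Setting this once at application startup makes eu-west-1 the default region for
// any client constructed without an explicit RegionEndpoint.
Amazon.AWSConfigs.AWSRegion = "eu-west-1";

var simpleDBClient = new AmazonSimpleDBClient(iamkey.AccessKeyId, iamkey.SecretAccessKey, iamkey.SessionToken);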