Dataflow job is failing - google-cloud-platform

I'm trying to trigger a Dataflow job to process a CSV file and then save the data into PostgreSQL.
The pipeline is written in Java. This is my pipeline code:
public class DataMappingService {
DataflowPipelineOptions options;
Pipeline pipeline;
private String jdbcUrl;
private String DB_NAME;
private String DB_PRIVATE_IP;
private String DB_USERNAME;
private String DB_PASSWORD;
private String PROJECT_ID;
private String SERVICE_ACCOUNT;
private static final String SQL_INSERT = "INSERT INTO upfit(upfitter_id, model_number, upfit_name, upfit_description, manufacturer, length, width, height, dimension_unit, weight, weight_unit, color, price, stock_number) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
public DataMappingService() {
DB_NAME = System.getenv("DB_NAME");
DB_PRIVATE_IP = System.getenv("DB_PRIVATEIP");
DB_USERNAME = System.getenv("DB_USERNAME");
DB_PASSWORD = System.getenv("DB_PASSWORD");
PROJECT_ID = System.getenv("GOOGLE_PROJECT_ID");
SERVICE_ACCOUNT = System.getenv("SERVICE_ACCOUNT_EMAIL");
jdbcUrl = "jdbc:postgresql://" + DB_PRIVATE_IP + ":5432/" + DB_NAME;
System.out.println("jdbcUrl: " + jdbcUrl);
System.out.println("dbUsername: " + DB_USERNAME);
System.out.println("dbPassword: " + DB_PASSWORD);
System.out.println("dbName: " + DB_NAME);
System.out.println("projectId: " + PROJECT_ID);
System.out.println("service account: " + SERVICE_ACCOUNT);
options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
options.setRunner(DataflowRunner.class);
options.setProject(PROJECT_ID);
options.setServiceAccount(SERVICE_ACCOUNT);
options.setWorkerRegion("us-east4");
options.setTempLocation("gs://upfit_data_flow_bucket/temp");
options.setStagingLocation("gs://upfit_data_flow_bucket/binaries");
options.setRegion("us-east4");
options.setSubnetwork("regions/us-east-4/subnetworks/us-east4-public");
options.setMaxNumWorkers(3);
}
public void processData(String gcsFilePath) {
try {
pipeline = Pipeline.create(options);
System.out.println("pipelineOptions: " +pipeline.getOptions());
pipeline.apply("ReadLines", TextIO.read().from(gcsFilePath))
.apply("SplitLines", new SplitLines())
.apply("SplitRecords", new SplitRecord())
.apply("ReadUpfits", new ReadUpfits());
.apply("write upfits", JdbcIO.<Upfit>write()
.withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
"org.postgresql.Driver", jdbcUrl)
.withUsername(DB_USERNAME)
.withPassword(DB_PASSWORD))
.withStatement(SQL_INSERT)
.withPreparedStatementSetter(new JdbcIO.PreparedStatementSetter<Upfit>() {
private static final long serialVersionUID = 1L;
@Override
public void setParameters(Upfit element, PreparedStatement query) throws SQLException {
query.setInt(1, element.getUpfitterId());
query.setString(2, element.getModelNumber());
query.setString(3, element.getUpfitName());
query.setString(4, element.getUpfitDescription());
query.setString(5, element.getManufacturer());
query.setString(6, element.getLength());
query.setString(7, element.getWidth());
query.setString(8, element.getHeight());
query.setString(9, element.getDimensionsUnit());
query.setString(10, element.getWeight());
query.setString(11, element.getWeightUnit());
query.setString(12, element.getColor());
query.setString(13, element.getPrice());
query.setInt(14, element.getStockAmmount());
}
}
));
pipeline.run();
} catch (Exception e) {
e.printStackTrace();
}
}
}
I have created a user-managed service account as described in the documentation: https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#specifying_a_user-managed_worker_service_account and I'm providing the service account email in the pipeline options.
The service account has the following roles:
roles/dataflow.worker
roles/storage.admin
iam.serviceAccounts.actAs
roles/dataflow.admin
Service Account Token Creator
When I upload the CSV file, the pipeline is triggered, but I'm getting the following error:
Workflow failed. Causes: Subnetwork https://www.googleapis.com/compute/v1/projects/project-name/regions/us-east-4/subnetworks/us-east4-public is not accessible to Dataflow Service account or does not exist
I know that the subnetwork exists, so I'm assuming it's a permission error. The VPC network is created by my organization, as we're not allowed to create our own.
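For reference, the subnetwork option can also be given as a fully-qualified URL; a minimal sketch (HOST_PROJECT_ID is a placeholder, and my understanding is that this full form is what a subnetwork owned by another project, i.e. a Shared VPC, expects):
// hypothetical example; HOST_PROJECT_ID and the subnetwork name are placeholders
options.setSubnetwork(
"https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/us-east4/subnetworks/us-east4-public");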

Related

AWS Redshift serverless - how to get the cluster id value

I'm following the AWS documentation about how to connect to Redshift: [generating user credentials][1].
But the get-cluster-credentials API requires a cluster id parameter, which I don't have for a serverless endpoint. What id should I use?
EDIT:
[![enter image description here][2]][2]
This is the screen of a serverless endpoint dashboard. There is no cluster ID.
[1]: https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html
[2]: https://i.stack.imgur.com/VzvIs.png
Look at this newer guide, which talks about connecting to Amazon Redshift Serverless: https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-connecting.html
See this information, which answers your question:
Connecting to the serverless endpoint with the Data API
You can also use the Amazon Redshift Data API to connect to the serverless endpoint. Leave off the cluster-identifier parameter in your AWS CLI calls to route your query to the serverless endpoint.
UPDATE
I wanted to test this to make sure that a successful connection can be made. I followed this doc to set up a Serverless instance.
Get started with Amazon Redshift Serverless
I loaded the sample data.
Now I attempted to connect to it using software.amazon.awssdk.services.redshiftdata.RedshiftDataClient.
The Java V2 code:
public static String performSQLStatement(RedshiftDataClient redshiftDataClient, String database, String sqlStatement) {
try {
ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
.database(database)
.sql(sqlStatement)
.build();
ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
return response.id();
} catch (RedshiftDataException e) {
System.err.println(e.getMessage());
System.exit(1);
}
return "";
}
Notice there is no cluster id or user, only a database name (sample_data_dev). The call worked perfectly.
Here is the full code example that successfully queries data from a serverless instance using the AWS SDK for Java V2.
package com.example.redshiftdata;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.redshiftdata.model.*;
import software.amazon.awssdk.services.redshiftdata.RedshiftDataClient;
import software.amazon.awssdk.services.redshiftdata.model.DescribeStatementRequest;
import java.util.List;
/**
* To run this Java V2 code example, ensure that you have setup your development environment, including your credentials.
*
* For information, see this documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class RetrieveDataServerless {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" RetrieveData <database> <sqlStatement> \n\n" +
"Where:\n" +
" database - the name of the database (for example, sample_data_dev). \n" +
" sqlStatement - the sql statement to use. \n" ;
String database = "sample_data_dev" ;
String sqlStatement = "Select * from tickit.sales" ;
Region region = Region.US_WEST_2;
RedshiftDataClient redshiftDataClient = RedshiftDataClient.builder()
.region(region)
.build();
String id = performSQLStatement(redshiftDataClient, database, sqlStatement);
System.out.println("The identifier of the statement is "+id);
checkStatement(redshiftDataClient,id );
getResults(redshiftDataClient, id);
redshiftDataClient.close();
}
public static void checkStatement(RedshiftDataClient redshiftDataClient,String sqlId ) {
try {
DescribeStatementRequest statementRequest = DescribeStatementRequest.builder()
.id(sqlId)
.build() ;
// Wait until the sql statement processing is finished.
boolean finished = false;
String status = "";
while (!finished) {
DescribeStatementResponse response = redshiftDataClient.describeStatement(statementRequest);
status = response.statusAsString();
System.out.println("..."+status);
if (status.compareTo("FINISHED") == 0) {
break;
}
Thread.sleep(1000);
}
System.out.println("The statement is finished!");
} catch (RedshiftDataException | InterruptedException e) {
System.err.println(e.getMessage());
System.exit(1);
}
}
public static String performSQLStatement(RedshiftDataClient redshiftDataClient,
String database,
String sqlStatement) {
try {
ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
.database(database)
.sql(sqlStatement)
.build();
ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
return response.id();
} catch (RedshiftDataException e) {
System.err.println(e.getMessage());
System.exit(1);
}
return "";
}
public static void getResults(RedshiftDataClient redshiftDataClient, String statementId) {
try {
GetStatementResultRequest resultRequest = GetStatementResultRequest.builder()
.id(statementId)
.build();
GetStatementResultResponse response = redshiftDataClient.getStatementResult(resultRequest);
// Iterate through the List element where each element is a List object.
List<List<Field>> dataList = response.records();
// Print out the records.
for (List<Field> record : dataList) {
for (Field field : record) {
String value = field.stringValue();
if (value != null)
System.out.println("The value of the field is " + value);
}
}
} catch (RedshiftDataException e) {
System.err.println(e.getMessage());
System.exit(1);
}
}
}

Get IAM Role information programmatically using Scala

I want to receive the role information for a role name, for example getting the exact ARN identifier.
Somehow the code below is not working. Sadly there is no error message in CloudWatch.
import software.amazon.awssdk.services.iam.*;
import com.amazonaws.services.identitymanagement.model._
import com.amazonaws.services.identitymanagement.{AmazonIdentityManagementClient, AmazonIdentityManagement, AmazonIdentityManagementClientBuilder}
// ....
val iamClient = AmazonIdentityManagementClient
.builder()
.withRegion("eu-central-1")
.build()
val roleRequest = new GetRoleRequest();
roleRequest.setRoleName("InfrastructureStack-StandardRoleD-HBLE12VPTWQ")
val result = iamClient.getRole(roleRequest) // <-- Nothing happens after this line
println("wont execute this println statement")
Other services like CognitoIdentityProvider are working perfectly fine.
I also tried the builder pattern for the GetRoleRequest and IamClient.
I got this IAM V2 code working fine. As stated in my comment, set up your dev environment to use the AWS SDK for Java V2.
package com.example.iam;
import software.amazon.awssdk.services.iam.model.*;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.iam.IamClient;
public class GetRole {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" <policyArn> \n\n" +
"Where:\n" +
" policyArn - a policy ARN that you can obtain from the AWS Management Console. \n\n" ;
// if (args.length != 1) {
// System.out.println(USAGE);
//// System.exit(1);
// }
String roleName = "DynamoDBAutoscaleRole" ; //args[0];
Region region = Region.AWS_GLOBAL;
IamClient iam = IamClient.builder()
.region(region)
.build();
getRoleInformation(iam, roleName);
System.out.println("Done");
iam.close();
}
public static void getRoleInformation(IamClient iam, String roleName) {
try {
GetRoleRequest roleRequest = GetRoleRequest.builder()
.roleName(roleName)
.build();
GetRoleResponse response = iam.getRole(roleRequest) ;
System.out.println("The ARN of the role is " +response.role().arn());
} catch (IamException e) {
System.err.println(e.awsErrorDetails().errorMessage());
System.exit(1);
}
}
}

How to connect different Regional S3 Bucket from a Spring Boot Application?

I have a Spring Boot application that has a POST end-point that accepts 2 types of files. Based on the file category, I need to write them to S3 buckets which are in different regions. Example: a Category 1 file should be written to the Frankfurt (eu-central-1) bucket and a Category 2 file should be written to the Ohio (us-east-2) bucket. Spring Boot accepts a static region (cloud.aws.region.static=eu-central-1) through property configuration and the connection is established at startup, so the AmazonS3 client bean is already created with a connection to Frankfurt.
I need to containerize this entire setup and deploy it in a K8s pod.
What is the recommendation for establishing connections and writing objects to different regional buckets? How should I implement this? I'm looking for a dynamic region-finding solution rather than a statically created bean per region.
Below is a working piece of code that connects to the Frankfurt bucket and PUTs the object.
@Service
public class S3Service {
@Autowired
private AmazonS3 amazonS3Client;
public void putObject(MultipartFile multipartFile) {
ObjectMetadata objectMetaData = new ObjectMetadata();
objectMetaData.setContentType(multipartFile.getContentType());
objectMetaData.setContentLength(multipartFile.getSize());
try {
PutObjectRequest putObjectRequest = new PutObjectRequest("example-bucket", multipartFile.getOriginalFilename(), multipartFile.getInputStream(), objectMetaData);
this.amazonS3Client.putObject(putObjectRequest);
} catch (IOException e) {
/* Handle Exception */
}
}
}
Updated Code (20/08/2021)
@Component
public class AmazoneS3ConnectionFactory {
private static final Logger LOGGER = LoggerFactory.getLogger(AmazoneS3ConnectionFactory.class);
@Value("${example.aws.s3.regions}")
private String[] regions;
@Autowired
private DefaultListableBeanFactory beanFactory;
@Autowired
private AWSCredentialsProvider credentialProvider;
@PostConstruct
public void init() {
for(String region: this.regions) {
String amazonS3BeanName = region + "_" + "amazonS3";
if (!this.beanFactory.containsBean(amazonS3BeanName)) {
AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard().withPathStyleAccessEnabled(true)
.withCredentials(this.credentialProvider).withRegion(region).withChunkedEncodingDisabled(true);
AmazonS3 awsS3 = builder.build();
this.beanFactory.registerSingleton(amazonS3BeanName, awsS3);
LOGGER.info("Bean " + amazonS3BeanName + " - Not exist. Created a bean and registered the same");
}
}
}
/**
* Returns {@link AmazonS3} for a region. Uses the default {@link AWSCredentialsProvider}.
*/
public AmazonS3 getConnection(String region) {
String amazonS3BeanName = region + "_" + "amazonS3";
return this.beanFactory.getBean(amazonS3BeanName, AmazonS3.class);
}
}
My service layer will call getConnection() and get the AmazonS3 object to operate on it.
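For illustration, a rough sketch of that service-layer usage (the service class name, region string, and bucket name are just examples):
import java.io.IOException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

@Service
public class RegionAwareS3Service {

    @Autowired
    private AmazoneS3ConnectionFactory connectionFactory;

    public void putObject(String region, String bucket, MultipartFile multipartFile) throws IOException {
        ObjectMetadata objectMetaData = new ObjectMetadata();
        objectMetaData.setContentType(multipartFile.getContentType());
        objectMetaData.setContentLength(multipartFile.getSize());

        // Pick the client that was registered for this region, e.g. "eu-central-1" or "us-east-2"
        AmazonS3 amazonS3 = connectionFactory.getConnection(region);
        amazonS3.putObject(new PutObjectRequest(bucket, multipartFile.getOriginalFilename(),
                multipartFile.getInputStream(), objectMetaData));
    }
}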
The only option that I am aware of is to create a different S3Client with S3ClientBuilder for each region. You would need to register them as Spring beans with different names so that you can later autowire them.
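In plain Java with the AWS SDK for Java V2, that idea might look roughly like the sketch below; the bean names are placeholders and the two regions are assumptions taken from the question (Frankfurt and Ohio):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

@Configuration
public class RegionalS3ClientConfiguration {

    // One client per region, registered under distinct bean names
    @Bean(name = "frankfurtS3Client")
    public S3Client frankfurtS3Client() {
        return S3Client.builder().region(Region.EU_CENTRAL_1).build();
    }

    @Bean(name = "ohioS3Client")
    public S3Client ohioS3Client() {
        return S3Client.builder().region(Region.US_EAST_2).build();
    }
}
A consumer can then autowire whichever client it needs with @Qualifier("frankfurtS3Client") or @Qualifier("ohioS3Client").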
Update (19/08/2021)
The following should work (sorry for the Kotlin code but it is faster to write):
Class that may contain your configuration for each region.
class AmazonS3Properties(val accessKeyId: String,
val secretAccessKey: String,
val region: String,
val bucket: String)
Configuration for S3 that will create two S3Clients and store the bucket for each region (needed later).
@Configuration
class AmazonS3Configuration(private val s3Properties: Map<String, AmazonS3Properties>) {
lateinit var buckets: Map<String, String>
@PostConstruct
fun init() {
buckets = s3Properties.mapValues { it.value.bucket }
}
@Bean(name = "regionA")
fun regionA(): S3Client {
val regionAProperties = s3Properties.getValue("region-a")
val awsCredentials = AwsBasicCredentials.create(regionAProperties.accessKeyId, regionAProperties.secretAccessKey)
return S3Client.builder().region(Region.of(regionAProperties.region)).credentialsProvider { awsCredentials }.build()
}
@Bean(name = "regionB")
fun regionB(): S3Client {
val regionBProperties = s3Properties.getValue("region-b")
val awsCredentials = AwsBasicCredentials.create(regionBProperties.accessKeyId, regionBProperties.secretAccessKey)
return S3Client.builder().region(Region.of(regionBProperties.region)).credentialsProvider { awsCredentials }.build()
}
}
Service that will target one of the regions (Region A)
@Service
class RegionAS3Service(private val amazonS3Configuration: AmazonS3Configuration,
@field:Qualifier("regionA") private val amazonS3Client: S3Client) {
fun save(region: String, byteArrayOutputStream: ByteArrayOutputStream) {
val inputStream = ByteArrayInputStream(byteArrayOutputStream.toByteArray())
val contentLength = byteArrayOutputStream.size().toLong()
amazonS3Client.putObject(PutObjectRequest.builder().bucket(amazonS3Configuration.buckets[region]).key("whatever-key").build(), RequestBody.fromInputStream(inputStream, contentLength))
}
}

How to manage secret rotation used by spring boot app running on ECS in AWS cloud

My organization is running a Spring Boot app in an AWS ECS Docker container which reads the credentials for PostgreSQL from Secrets Manager in AWS during boot-up. As part of security compliance, we are rotating the secrets every 3 months. The Spring Boot app is losing its connection with the database and going down when the RDS credentials are rotated. We have to restart it in order to pick up the new credentials. Is there any way I can read the credentials automatically once they are rotated, to avoid restarting the application manually?
After some research I found that the Postgres database in AWS supports passwordless authentication using IAM roles. We can generate a token which is valid for 15 minutes and connect to the database using that token. I prefer this way of connecting to the database rather than using a password. More details about setting up passwordless authentication can be found here.
A code example is below:
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;
import org.apache.commons.lang3.StringUtils;
import org.apache.tomcat.jdbc.pool.ConnectionPool;
import org.apache.tomcat.jdbc.pool.PoolConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.URI;
import java.net.URISyntaxException;
import java.sql.SQLException;
import java.util.Properties;
public class RdsIamAuthDataSource extends org.apache.tomcat.jdbc.pool.DataSource {
private static final Logger LOGGER = LoggerFactory.getLogger(RdsIamAuthDataSource.class);
private static final int DEFAULT_PORT = 5432;
private static final String USESSL = "useSSL";
private static final String REQUIRE_SSL = "requireSSL";
private static final String BOOLEAN_TRUE = "true";
private static final String VERIFY_SERVER_CERTIFICATE = "verifyServerCertificate";
private static final String THREAD_NAME = "RdsIamAuthDataSourceTokenThread";
/**
* Constructor for RdsIamAuthDataSource.
* @param props {@link PoolConfiguration}
*/
public RdsIamAuthDataSource(PoolConfiguration props) {
this.poolProperties = props;
}
@Override
public ConnectionPool createPool() throws SQLException {
if (pool == null) {
return createPoolImpl();
} else {
return pool;
}
}
protected ConnectionPool createPoolImpl() throws SQLException {
synchronized (this) {
return pool = new RdsIamAuthConnectionPool(poolProperties);
}
}
private class RdsIamAuthConnectionPool extends ConnectionPool implements Runnable {
private RdsIamAuthTokenGenerator rdsIamAuthTokenGenerator;
private String host;
private String region;
private int port;
private String username;
private Thread tokenThread;
/**
* Constructor for RdsIamAuthConnectionPool.
* @param prop {@link PoolConfiguration}
* @throws SQLException {@link SQLException}
*/
public RdsIamAuthConnectionPool(PoolConfiguration prop) throws SQLException {
super(prop);
}
@Override
protected void init(PoolConfiguration prop) throws SQLException {
try {
final URI uri = new URI(prop.getUrl().substring(5));
this.host = uri.getHost();
this.port = uri.getPort();
if (this.port < 0) {
this.port = DEFAULT_PORT;
}
this.region = StringUtils.split(this.host,'.')[2];
this.username = prop.getUsername();
this.rdsIamAuthTokenGenerator = RdsIamAuthTokenGenerator.builder()
.credentials(new DefaultAWSCredentialsProviderChain())
.region(this.region)
.build();
updatePassword(prop);
final Properties props = prop.getDbProperties();
props.setProperty(USESSL, BOOLEAN_TRUE);
props.setProperty(REQUIRE_SSL, BOOLEAN_TRUE);
props.setProperty(VERIFY_SERVER_CERTIFICATE, BOOLEAN_TRUE);
super.init(prop);
this.tokenThread = new Thread(this, THREAD_NAME);
this.tokenThread.setDaemon(true);
this.tokenThread.start();
} catch (URISyntaxException e) {
LOGGER.error("Database URL is not correct. Please verify", e);
throw new RuntimeException(e.getMessage());
}
}
/**
* Refresh the token every 12 minutes.
*/
@Override
public void run() {
try {
while (this.tokenThread != null) {
Thread.sleep(12 * 60 * 1000);
updatePassword(getPoolProperties());
}
} catch (InterruptedException e) {
LOGGER.error("Background token thread interrupted", e);
}
}
@Override
protected void close(boolean force) {
super.close(force);
final Thread thread = tokenThread;
if (thread != null) {
thread.interrupt();
}
}
private void updatePassword(PoolConfiguration props) {
final String token = rdsIamAuthTokenGenerator.getAuthToken(GetIamAuthTokenRequest.builder()
.hostname(host)
.port(port)
.userName(this.username)
.build());
LOGGER.info("Updated IAM token for connection pool");
props.setPassword(token);
}
}
}
Supply the following DataSource as a Spring bean. That's it. Now your application will automatically refresh credentials every 12 minutes:
@Bean
public DataSource dataSource() {
final PoolConfiguration props = new PoolProperties();
props.setUrl("jdbc:postgresql://myapp.us-east-2.rds.amazonaws.com/myschema?ssl=true");
props.setUsername("rdsadminuser");
props.setDriverClassName("org.somedatabase.Driver");
return new RdsIamAuthDataSource(props);
}

Instance created via Service Account unable to use Google Cloud Speech API - authentication error

I followed Google's Quick Start documentation for the Speech API to enable billing and the API for an account. This account has authorized a service account to create Compute instances on its behalf. After creating an instance on the child account, hosting a binary that uses the Speech API, I am unable to successfully use the example C# code provided by Google for the Speech API:
try
{
var speech = SpeechClient.Create();
var response = speech.Recognize(new RecognitionConfig()
{
Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
LanguageCode = "en"
}, RecognitionAudio.FromFile(audioFiles[0]));
foreach (var result in response.Results)
{
foreach (var alternative in result.Alternatives)
{
Debug.WriteLine(alternative.Transcript);
}
}
} catch (Exception ex) {
// ...
}
Requests fail on the SpeechClient.Create() line with the following error:
Grpc.Core.RpcException: Status(StatusCode=Unauthenticated, Detail="Exception occured in metadata credentials plugin.")
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Grpc.Core.Internal.AsyncCall`2.UnaryCall(TRequest msg)
at Grpc.Core.Calls.BlockingUnaryCall[TRequest,TResponse](CallInvocationDetails`2 call, TRequest req)
at Grpc.Core.DefaultCallInvoker.BlockingUnaryCall[TRequest,TResponse](Method`2 method, String host, CallOptions options, TRequest request)
at Grpc.Core.Internal.InterceptingCallInvoker.BlockingUnaryCall[TRequest,TResponse](Method`2 method, String host, CallOptions options, TRequest request)
at Google.Cloud.Speech.V1.Speech.SpeechClient.Recognize(RecognizeRequest request, CallOptions options)
at Google.Api.Gax.Grpc.ApiCall.<>c__DisplayClass0_0`2.b__1(TRequest req, CallSettings cs)
at Google.Api.Gax.Grpc.ApiCallRetryExtensions.<>c__DisplayClass1_0`2.b__0(TRequest request, CallSettings callSettings)
at Google.Api.Gax.Grpc.ApiCall`2.Sync(TRequest request, CallSettings perCallCallSettings)
at Google.Cloud.Speech.V1.SpeechClientImpl.Recognize(RecognizeRequest request, CallSettings callSettings)
at Google.Cloud.Speech.V1.SpeechClient.Recognize(RecognitionConfig config, RecognitionAudio audio, CallSettings callSettings)
at Rc2Solver.frmMain.RecognizeWordsGoogleSpeechApi() in C:\Users\jorda\Google Drive\VSProjects\Rc2Solver\Rc2Solver\frmMain.cs:line 1770
I have verified that the Speech API is activated. Here is the scope that the service account uses when creating the Compute instances:
credential = new ServiceAccountCredential(
new ServiceAccountCredential.Initializer(me)
{
Scopes = new[] { ComputeService.Scope.Compute, ComputeService.Scope.CloudPlatform }
}.FromPrivateKey(yk)
);
I have found no information or code online about specifically authorizing or authenticating the Speech API for service account actors. Any help is appreciated.
It turns out the issue was that the Compute instances needed to be created with a ServiceAccount parameter specified. Otherwise the instances were not attached to a service account, so there were no default credentials for the SpeechClient.Create() call to pick up. Here is the proper way to create an instance attached to a service account; it will use the default SA tied to the project ID:
service = new ComputeService(new BaseClientService.Initializer() {
HttpClientInitializer = credential,
ApplicationName = "YourAppName"
});
string MyProjectId = "example-project-27172";
var project = await service.Projects.Get(MyProjectId).ExecuteAsync();
ServiceAccount servAcct = new ServiceAccount() {
Email = project.DefaultServiceAccount,
Scopes = new [] {
"https://www.googleapis.com/auth/cloud-platform"
}
};
Instance instance = new Instance() {
MachineType = service.BaseUri + MyProjectId + "/zones/" + targetZone + "/machineTypes/" + "g1-small",
Name = name,
Description = name,
Disks = attachedDisks,
NetworkInterfaces = networkInterfaces,
ServiceAccounts = new [] {
servAcct
},
Metadata = md
};
batchRequest.Queue<Instance>(service.Instances.Insert(instance, MyProjectId, targetZone),
(content, error, i, message) => {
if (error != null) {
AddEventMsg("Error creating instance " + name + ": " + error.ToString());
} else {
AddEventMsg("Instance " + name + " created");
}
});