I'm trying to grab the latest secret version. Is there a way to do that without specifying the version number, such as by using the keyword "latest"? I'm trying to avoid iterating through all the secret versions with a for loop, as the GCP documentation shows:
try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
  // Build the parent name.
  SecretName projectName = SecretName.of(projectId, secretId);

  // Get all versions.
  ListSecretVersionsPagedResponse pagedResponse = client.listSecretVersions(projectName);

  // List all versions and their state.
  pagedResponse
      .iterateAll()
      .forEach(
          version -> {
            System.out.printf("Secret version %s, %s\n", version.getName(), version.getState());
          });
}
Yes, you can use "latest" as the version number. This is called an "alias". At present, the only alias is "latest", but we may support more aliases in the future.
gcloud secrets versions access "latest" --secret "my-secret"
try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
  SecretVersionName secretVersionName = SecretVersionName.of(projectId, secretId, "latest"); // <-- here

  // Access the secret version.
  AccessSecretVersionResponse response = client.accessSecretVersion(secretVersionName);
  String payload = response.getPayload().getData().toStringUtf8();
  System.out.printf("Plaintext: %s\n", payload);
}
import com.google.cloud.secretmanager.v1.AccessSecretVersionResponse;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient;
import com.google.cloud.secretmanager.v1.SecretVersionName;
import java.io.IOException;

public class AccessSecretVersion {

  public static void accessSecretVersion() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-project-id";
    String secretId = "your-secret-id";
    String versionId = "latest"; // <-- specify version
    accessSecretVersion(projectId, secretId, versionId);
  }

  // Access the payload for the given secret version if one exists. The version
  // can be a version number as a string (e.g. "5") or an alias (e.g. "latest").
  public static void accessSecretVersion(String projectId, String secretId, String versionId)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
      SecretVersionName secretVersionName = SecretVersionName.of(projectId, secretId, versionId);

      // Access the secret version.
      AccessSecretVersionResponse response = client.accessSecretVersion(secretVersionName);

      // Print the secret payload.
      //
      // WARNING: Do not print the secret in a production environment - this
      // snippet is showing how to access the secret material.
      String payload = response.getPayload().getData().toStringUtf8();
      System.out.printf("Plaintext: %s\n", payload);
    }
  }
}
source: https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets#secretmanager-access-secret-version-java
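The same alias works from the other client libraries. As a minimal Python sketch (assuming the google-cloud-secret-manager package is installed), "latest" simply takes the place of a numeric version in the resource name:

```python
# The resource name format is fixed by the Secret Manager API; "latest"
# is used in place of a numeric version.
def latest_version_name(project_id, secret_id):
    return f"projects/{project_id}/secrets/{secret_id}/versions/latest"

name = latest_version_name("your-project-id", "your-secret-id")
print(name)  # projects/your-project-id/secrets/your-secret-id/versions/latest

# Accessing it would then look like:
# from google.cloud import secretmanager
# client = secretmanager.SecretManagerServiceClient()
# response = client.access_secret_version(request={"name": name})
# payload = response.payload.data.decode("UTF-8")
```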
With Terraform GCP provider 4.30.0, I can now create a Google Maps API key and restrict it.
resource "google_apikeys_key" "maps-api-key" {
  provider     = google-beta
  name         = "maps-api-key"
  display_name = "google-maps-api-key"
  project      = local.project_id

  restrictions {
    api_targets {
      service = "static-maps-backend.googleapis.com"
    }
    api_targets {
      service = "maps-backend.googleapis.com"
    }
    api_targets {
      service = "places-backend.googleapis.com"
    }
    browser_key_restrictions {
      allowed_referrers = [
        "https://${local.project_id}.ey.r.appspot.com/*", # raw url to the app engine service
        "*.example.com/*"                                 # Custom DNS name to access the app
      ]
    }
  }
}
The key is created and appears in the console as expected and I can see the API_KEY value.
When I deploy my app, I want it to read the API_KEY string.
My node.js app already reads secrets from secret manager, so I want to add it as a secret.
Another approach could be for the node client library to read the API credential directly, instead of using secret-manager, but I haven't found a way to do that.
I can't work out how to read the key string and store it in the secret.
The Terraform resource documentation describes the output:
key_string - Output only. An encrypted and signed value held by this
key. This field can be accessed only through the GetKeyString method.
I don't know how to call this method in Terraform to pass the value to a secret version. This doesn't work:
v1 = { enabled = true, data = resource.google_apikeys_key.maps-api-key.GetKeyString }
Referencing attributes and arguments does not work the way you tried it. You did quote the correct attribute, but GetKeyString is the underlying API method; the Terraform attribute that exposes its result is key_string:
v1 = {
  enabled = true,
  data    = resource.google_apikeys_key.maps-api-key.key_string
}
Make sure to understand how referencing attributes in Terraform works [1].
[1] https://www.terraform.io/language/expressions/references#references-to-resource-attributes
I am calling the method dlp.deidentify_content in the following code. The key ring is made in region us-east1 and the keys are generated using HSM. GCP does not allow generating an HSM key for a global key ring.
# Import the client library
import google.cloud.dlp

# Instantiate a client
dlp = google.cloud.dlp_v2.DlpServiceClient()

# Convert the project id into a full resource id.
parent = dlp.project_path(project)

# The wrapped key is base64-encoded, but the library expects a binary
# string, so decode it here.
import base64

wrapped_key = base64.b64decode(wrapped_key)

# Construct FPE configuration dictionary
crypto_replace_ffx_fpe_config = {
    "crypto_key": {
        "kms_wrapped": {
            "wrapped_key": wrapped_key,
            "crypto_key_name": key_name,
        }
    },
    "common_alphabet": alphabet,
}

# Add surrogate type
if surrogate_type:
    crypto_replace_ffx_fpe_config["surrogate_info_type"] = {"name": surrogate_type}

# Construct inspect configuration dictionary
inspect_config = {"info_types": [{"name": info_type} for info_type in info_types]}

# Construct deidentify configuration dictionary
deidentify_config = {
    "info_type_transformations": {
        "transformations": [
            {
                "primitive_transformation": {
                    "crypto_replace_ffx_fpe_config": crypto_replace_ffx_fpe_config
                }
            }
        ]
    }
}

# Convert string to item
item = {"value": string}

# Call the API
response = dlp.deidentify_content(
    parent,
    inspect_config=inspect_config,
    deidentify_config=deidentify_config,
    item=item,
    # location_id="us-east1",
)

# Print results
print(response.item.value)
When I run the code, I get the error:
google.api_core.exceptions.NotFound: 404 Received the following error message from Cloud KMS when unwrapping KmsWrappedCryptoKey "projects/PROJ_NAME/locations/us-east1/keyRings/dlp-test3/cryptoKeys/key7": The request concerns location 'us-east1' but was sent to location 'global'. Read go/storky-stubby for more information.
I am unable to figure out how to send the request from a specific region. Ideally I would want the key ring to be global; however, GCP does not allow HSM keys for global key rings, so I cannot have a wrapped_key for such a key.
Can someone suggest how to overcome the error?
Cloud HSM keys cannot be created or imported in some locations, such as global; this is only possible with Cloud EKM keys. If you want to use Cloud HSM in an available location such as "us-east1", you can follow these steps for importing keys into a region.
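The error message also points at request routing: the KMS key lives in us-east1, but the DLP request was processed in the global location, so the request's parent should name the key's region rather than the bare project. A hedged sketch (the project and location values here are assumptions taken from the error message, and the commented call reflects the newer google-cloud-dlp request shape):

```python
# Build a regional parent resource name so DLP processes the request in the
# same location as the KMS key ring (us-east1 here, taken from the error).
def regional_parent(project_id, location_id):
    return f"projects/{project_id}/locations/{location_id}"

parent = regional_parent("PROJ_NAME", "us-east1")
print(parent)  # projects/PROJ_NAME/locations/us-east1

# With recent google-cloud-dlp clients the call would then look like:
# response = dlp.deidentify_content(
#     request={
#         "parent": parent,
#         "inspect_config": inspect_config,
#         "deidentify_config": deidentify_config,
#         "item": item,
#     }
# )
```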
I am using AWS Systems Manager Parameter Store to hold database connection strings which are used to dynamically build a DbContext in my .NET Core Application
I am using the .NET Core AWS configuration provider (from https://aws.amazon.com/blogs/developer/net-core-configuration-provider-for-aws-systems-manager/) which injects my parameters into the IConfiguration at runtime.
At the moment I am having to keep my AWS access key/secret in code so they can be accessed by the ConfigurationBuilder, but I would like to move this out of the code base and store it in appsettings or similar.
Here is my method to create the web host builder, called at startup:
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    var webHost = WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();

    AWSCredentials credentials = new BasicAWSCredentials("xxxx", "xxxx");
    AWSOptions options = new AWSOptions()
    {
        Credentials = credentials,
        Region = Amazon.RegionEndpoint.USEast2
    };

    webHost.ConfigureAppConfiguration(config =>
    {
        config.AddJsonFile("appsettings.json");
        config.AddSystemsManager("/ParameterPath", options, reloadAfter: new System.TimeSpan(0, 1, 0)); // Reload every minute
    });

    return webHost;
}
I need to be able to inject the BasicAWSCredentials parameter from somewhere.
You need to access an already-built configuration to retrieve the information you seek. Consider building one just to read the needed credentials:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) {
    var webHost = WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();

    var configuration = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .Build();

    var access_key = configuration.GetValue<string>("access_key:path_here");
    var secret_key = configuration.GetValue<string>("secret_key:path_here");

    AWSCredentials credentials = new BasicAWSCredentials(access_key, secret_key);
    AWSOptions options = new AWSOptions() {
        Credentials = credentials,
        Region = Amazon.RegionEndpoint.USEast2
    };

    webHost.ConfigureAppConfiguration(config => {
        config.AddJsonFile("appsettings.json");
        config.AddSystemsManager("/ParameterPath", options, reloadAfter: new System.TimeSpan(0, 1, 0)); // Reload every minute
    });

    return webHost;
}
I would also suggest reviewing Configuring AWS Credentials in the docs, as the SDK offers alternative ways of storing and retrieving the credentials.
Consider this example of the sync version with the old AWS SDK:
public void syncIterateObjects() {
    AmazonS3 s3Client = null;
    String marker = null;
    do {
        ObjectListing objects = s3Client.listObjects(
            new ListObjectsRequest()
                .withBucketName("bucket")
                .withPrefix("prefix")
                .withMarker(marker)
                .withDelimiter("/")
                .withMaxKeys(100)
        );
        marker = objects.getNextMarker();
    } while (marker != null);
}
Everything is clear: the do/while loop does the work. Now consider the async example with AWS SDK 2.0:
public void asyncIterateObjects() {
    S3AsyncClient client = S3AsyncClient.builder().build();

    final CompletableFuture<ListObjectsV2Response> response = client.listObjectsV2(ListObjectsV2Request.builder()
            .delimiter("/")
            .bucket("bucket")
            .prefix("prefix")
            .build())
        .thenApply(Function.identity());

    // what to do next ???
}
OK, I got a CompletableFuture, but how do I run a loop that passes the marker (nextContinuationToken in AWS SDK 2.0) from one Future to the next?
You have only one future; notice the type is a future list of objects. Now you have to decide whether to get the future's value or apply further transformations to it before getting it. Once you have a page's result, you can use the same do/while approach as before, feeding its continuation token into the request for the next page.
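The token-passing loop itself is language-agnostic; here is a minimal Python sketch of the pattern (the paginated source is fake), where each awaited page yields the token for the next request. In the Java SDK 2.x you would express the same thing by chaining stages with thenCompose, or avoid manual tokens entirely with the listObjectsV2Paginator variant on S3AsyncClient.

```python
import asyncio

# Fake paginated listing: token -> (keys on this page, next token or None).
PAGES = {
    None: (["a", "b"], "t1"),
    "t1": (["c"], "t2"),
    "t2": (["d"], None),
}

async def list_page(token):
    await asyncio.sleep(0)  # stand-in for the asynchronous network call
    return PAGES[token]

async def list_all():
    keys, token = [], None
    while True:
        page, token = await list_page(token)  # token from the previous page
        keys.extend(page)
        if token is None:  # no continuation token -> last page
            return keys

print(asyncio.run(list_all()))  # ['a', 'b', 'c', 'd']
```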
I am trying to get my Grails app working with Amazon S3. I have been following these docs: http://agorapulse.github.io/grails-aws-sdk/guide/single.html
At the following step amazonWebService.s3.putObject(new PutObjectRequest('some-grails-bucket', 'somePath/someKey.jpg', new File('/Users/ben/Desktop/photo.jpg')).withCannedAcl(CannedAccessControlList.PublicRead))
The project can't resolve the class PutObjectRequest, and I have tried importing com.amazonaws.services.s3.model.PutObjectRequest manually, but it still can't find the class. The only thing I can think of is that I might have an older version of the SDK, though I only followed the tutorial.
My BuildConfig.groovy...
...
dependencies {
    // dependencies for amazon aws plugin
    build 'org.apache.httpcomponents:httpcore:4.3.2'
    build 'org.apache.httpcomponents:httpclient:4.3.2'
    runtime 'org.apache.httpcomponents:httpcore:4.3.2'
    runtime 'org.apache.httpcomponents:httpclient:4.3.2'
}

plugins {
    ...
    runtime ':aws-sdk:1.9.40'
}
Has anyone else run into this issue and found a solution?
I don't use the plugin; I simply use the SDK directly. I'm not sure what you would need a plugin for, and you don't need httpcomponents for it to work.
Add this to your dependencies block:
compile('com.amazonaws:aws-java-sdk-s3:1.10.2') {
    exclude group: 'com.fasterxml.jackson.core'
}
Here's the bean I use. I set the access key, secret key, and bucket name in the bean configuration.
class AmazonStorageService implements FileStorageService {

    String accessKeyId
    String secretAccessKey
    String bucketName

    AmazonS3Client s3client

    @PostConstruct
    private void init() {
        s3client = new AmazonS3Client(new BasicAWSCredentials(accessKeyId, secretAccessKey));
    }

    String upload(String name, InputStream inputStream) {
        s3client.putObject(new PutObjectRequest(bucketName, name, inputStream, null).withCannedAcl(CannedAccessControlList.PublicRead));
        getUrl(name)
    }

    String upload(String name, byte[] data) {
        upload(name, new ByteArrayInputStream(data))
    }

    String getUrl(String name) {
        s3client.getUrl(bucketName, name)
    }

    Boolean exists(String name) {
        try {
            s3client.getObjectMetadata(bucketName, name)
            true
        } catch (AmazonServiceException e) {
            false
        }
    }
}