I'm trying to perform a TransactWrite using PynamoDB, but I'm getting a botocore.exceptions.NoCredentialsError: Unable to locate credentials error.
I'm sending requests to DynamoDB from my local PC, and since MFA is enabled on the locally stored AWS profile, I obtained temporary credentials through STS and applied them.
It seems that the credentials set in the table's Meta class are not being picked up. Has anyone run into this?
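For reference, the temporary credentials are obtained roughly like this (a minimal sketch; the profile name, MFA serial, and token code are placeholders):

import boto3

# Profile name, MFA device ARN and token code below are placeholders
session = boto3.session.Session(profile_name="my-profile")
credentials = session.client("sts").get_session_token(
    SerialNumber="arn:aws:iam::123456789012:mfa/my-user",
    TokenCode="123456",
)["Credentials"]
# credentials now holds AccessKeyId, SecretAccessKey and SessionToken,
# which are the values referenced in Table.Meta below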
Below is the code that attempts the TransactWrite, followed by the table definition.
from pynamodb.connection import Connection
from pynamodb.transactions import TransactWrite

connection = Connection(region="ap-northeast-2")

with TransactWrite(connection=connection) as transaction:
    # item is an existing Table instance fetched earlier
    transaction.update(
        item,
        actions=[
            Table.one.set(1),
            Table.type.set("test"),
        ],
    )
    transaction.save(
        Table("1234", "test"),
    )
from pynamodb.models import Model
from pynamodb.attributes import NumberAttribute, UnicodeAttribute

class Table(Model):
    class Meta:
        aws_access_key_id = credentials["AccessKeyId"]
        aws_secret_access_key = credentials["SecretAccessKey"]
        aws_session_token = credentials["SessionToken"]
        table_name = "test-dev-table"
        region = "ap-northeast-2"

    id = UnicodeAttribute(hash_key=True)
    ts = NumberAttribute(range_key=True)
    one = NumberAttribute(null=True)
    type = UnicodeAttribute(null=True)
Thank you for your help.
I'm attempting to create a dataproc cluster using the https://github.com/googleapis/java-dataproc library, following the example here: https://github.com/googleapis/java-dataproc/blob/main/samples/snippets/src/main/java/CreateCluster.java
My code (translated to Scala):
import com.google.cloud.dataproc.v1._

object CreateCluster extends App {
  val projectId = "my-project-id"
  val region = "europe-west1"
  val clusterName = "test-cluster"
  val regionEndpoint = s"$region-dataproc.googleapis.com:443"

  val clusterControllerSettings = ClusterControllerSettings.newBuilder()
    .setEndpoint(regionEndpoint)
    .build()
  val clusterControllerClient = ClusterControllerClient.create(clusterControllerSettings)

  val masterConfig = InstanceGroupConfig.newBuilder.setMachineTypeUri("n1-standard-2").setNumInstances(1).build
  val workerConfig = InstanceGroupConfig.newBuilder.setMachineTypeUri("n1-standard-2").setNumInstances(2).build
  val clusterConfig = ClusterConfig.newBuilder.setMasterConfig(masterConfig).setWorkerConfig(workerConfig).build
  val cluster = Cluster.newBuilder().setClusterName(clusterName).setConfig(clusterConfig).build()

  val createClusterAsyncRequest = clusterControllerClient.createClusterAsync(projectId, region, cluster)
  val createResponse: Cluster = createClusterAsyncRequest.get()
  println(s"Created cluster: ${createResponse.getClusterName}")

  clusterControllerClient.close()
}
I'm getting io.grpc.StatusRuntimeException: PERMISSION_DENIED: Required 'compute.regions.get' permission for 'projects/my-project/regions/europe-west1'.
I'm unclear as to exactly what is meant here: https://github.com/googleapis/java-dataproc#authorization. I'm trying to get this to work from my desktop so what I've done is run gcloud auth application-default login --scopes https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/compute.readonly.
I'm certain my 'normal' user has the necessary permissions, as I've executed a regions.get for my project/region from this page: https://cloud.google.com/compute/docs/reference/rest/v1/regions/get, and I can create Dataproc clusters without issue when not using the Java library.
I'm clearly missing something, probably something obvious, but am stuck so any help will be greatly appreciated!
Edit 1:
gcloud auth application-default login without specifying --scopes results in the same permission error
Edit 2:
I'm still none the wiser as to why I'm getting the compute.regions.get permission missing error.
I've written some more code which appears to show I do have the necessary permission when using getApplicationDefault:
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport
import com.google.api.client.json.gson.GsonFactory
import com.google.api.services.compute.Compute
import com.google.auth.http.HttpCredentialsAdapter
import com.google.auth.oauth2.GoogleCredentials.getApplicationDefault

object GetRegions extends App {
  val project = "my-project-id"
  val region = "europe-west1"

  val httpTransport = GoogleNetHttpTransport.newTrustedTransport
  val jsonFactory = GsonFactory.getDefaultInstance
  val httpCredentials = new HttpCredentialsAdapter(getApplicationDefault)

  val computeService = new Compute.Builder(httpTransport, jsonFactory, httpCredentials).build
  val request = computeService.regions.get(project, region)
  val response = request.execute
  System.out.println(response) // This successfully prints out details
}
It turned out that it was the Dataproc service agent, not my user, that was missing this permission (its permissions had been modified from the defaults).
See https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#service_agent_control_plane_identity.
Not explicitly setting the zone when creating a cluster seems to mean that this service account requires the compute.regions.get permission. Explicitly setting a zone meant it didn't.
I have the following script to list trails from CloudTrail:
import boto3
import os
os.environ['AWS_DEFAULT_REGION'] = 'us-east-2'
current_session = boto3.session.Session(profile_name='production')
client = current_session.client('cloudtrail')
response = client.list_trails()
print(response)
This only gives me the list in us-east-1.
I have tried setting the region by passing it as an argument to the session, and also by setting it as an environment variable on the command line, but it still only looks at us-east-1.
Any suggestions?
I suspect your profile does not have a region associated with it. For this reason, the session instantiation uses us-east-1 as the default.
To fix this, explicitly specify the region name in the session instantiation:
current_session = boto3.session.Session(profile_name='production', region_name='us-east-2')
Define it in the session config, which has the following:
aws_access_key_id - A specific AWS access key ID.
aws_secret_access_key - A specific AWS secret access key.
region_name - The AWS Region where you want to create new connections.
profile_name - The profile to use when creating your session.
So just add region_name in your example.
See:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/session.html#session-configurations
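As a quick sanity check (a small sketch reusing the profile name from the question), you can confirm which region the session actually resolved:

import boto3

session = boto3.session.Session(profile_name='production', region_name='us-east-2')
print(session.region_name)   # should now print 'us-east-2'

client = session.client('cloudtrail')
print(client.list_trails())  # the request now goes to the us-east-2 endpoint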
I am trying to retrieve key-value pairs from AWS Secrets Manager and pass them to my Azure SQL Server. For Secrets Manager I am using this module:
module "secrets-manager" {
source = "lgallard/secrets-manager/aws"
version = "0.4.1"
secrets = [
{
name = "secretKeyValue"
description = "Secret Key value pair"
secret_key_value = {
username = "username"
password = "password"
}
}
]
}
Then I created an azurerm SQL Server and would like to pass it the username and password. This is what I tried:
resource "azurerm_sql_server" "sql-server-testing" {
administrator_login = module.secrets-manager.secret_ids[0]
administrator_login_password = module.secrets-manager.secret_ids[0]
location = "westeurope"
name = "sql-name"
resource_group_name = azurerm_resource_group.azure-secrets.name
version = "12.0"
}
I am able to access Secrets Manager, but secret_ids only gives me the Amazon ARN of the secret, and I can't find a way to pass the secret's username and password to my SQL Server.
Thank you very much for any help you can provide
1- Retrieve metadata about the Secrets Manager secret via the aws_secretsmanager_secret data source:
data "aws_secretsmanager_secret" "secrets" {
arn = module.secrets-manager.secret_ids[0]
}
data "aws_secretsmanager_secret_version" "current" {
secret_id = data.aws_secretsmanager_secret.secrets.id
}
2- Retrieve specific values inside that secret (in the SQL Server resource):
administrator_login = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["username"]
administrator_login_password = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["password"]
I'm new to Stack Overflow, so apologies if I didn't format this right.
I'm currently using Terraform to provision Aurora RDS. The problem is that I shouldn't have the DB master password sitting as plaintext in the .tf file.
I've initially been using this config with a plaintext password:
engine = "aurora-mysql"
engine_version = "5.7.12"
cluster_family = "aurora-mysql5.7"
cluster_size = "1"
namespace = "eg"
stage = "dev"
admin_user = "admin"
admin_password = "passwordhere"
db_name = "dbname"
db_port = "3306
I'm looking for a solution where I can avoid a plaintext password like the one shown above and instead have something auto-generated that can be referenced in the Terraform file. I also need to be able to retrieve the password afterwards, so that I can use it to configure a WordPress server.
https://gist.github.com/smiller171/6be734957e30c5d4e4b15422634f13f4
I came across this solution, but I'm not sure how to retrieve the password in order to use it on the server. I haven't deployed it yet either.
As you mentioned in your question, there is a workaround which you haven't tried yet.
I suggest trying that first. If it's successful, you can retrieve the password using a Terraform output:
output "db_password" {
value = ${random_string.db_master_pass.result}
description = "db password"
}
Once your Terraform run has completed you can retrieve that value with terraform output db_password, or if you want to use the password somewhere else in the Terraform code itself, reference random_string.db_master_pass.result directly.
Terraform allows you to define the Postgres master user and password with the username and password options, but there is no option to set up an application Postgres user. How would you do that?
The AWS RDS resource is only used for creating/updating/deleting the RDS resource itself using the AWS APIs.
To create users or databases on the RDS instance itself you'd either want to use another tool (such as psql, the official command line tool, or a configuration management tool such as Ansible) or use Terraform's PostgreSQL provider.
Assuming you've already created your RDS instance you would then connect to the instance as the master user and then create the application user with something like this:
provider "postgresql" {
host = "postgres_server_ip1"
username = "postgres_user"
password = "postgres_password"
}
resource "postgresql_role" "application_role" {
name = "application"
login = true
password = "application-password"
encrypted = true
}
In addition to ydaetskcoR's answer, here is a full example for RDS PostgreSQL:
provider "postgresql" {
scheme = "awspostgres"
host = "db.domain.name"
port = "5432"
username = "db_username"
password = "db_password"
superuser = false
}
resource "postgresql_role" "new_db_role" {
name = "new_db_role"
login = true
password = "db_password"
encrypted_password = true
}
resource "postgresql_database" "new_db" {
name = "new_db"
owner = postgresql_role.new_db_role.name
template = "template0"
lc_collate = "C"
connection_limit = -1
allow_connections = true
}
The above two answers require that the host running Terraform has direct access to the RDS database, which it usually does not. I propose coding what you need to do in a Lambda function (optionally using Secrets Manager to retrieve the master password):
resource "aws_lambda_function" "terraform_lambda_func" {
filename = "${path.module}/lambda_function/lambda_function.zip"
...
}
and then use the following data source (example) to call the lambda function.
data "aws_lambda_invocation" "create_app_user" {
function_name = aws_lambda_function.terraform_lambda_func.function_name
input = <<-JSON
{
"step": "create_app_user"
}
JSON
depends_on = [aws_lambda_function.terraform_lambda_func]
provider = aws.primary
}
This solution is generic: it can do anything a Lambda function with access to the AWS API can do, which is basically limitless.
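For completeness, here is a minimal sketch of what the Lambda handler itself might look like, assuming the function is packaged with psycopg2 and reads the master credentials from a Secrets Manager secret (the secret name, role name, and password below are hypothetical placeholders):

import json
import boto3
import psycopg2  # must be bundled in the deployment package or provided via a layer

def lambda_handler(event, context):
    # Fetch the master credentials from Secrets Manager (secret name is hypothetical)
    secret = boto3.client("secretsmanager").get_secret_value(SecretId="rds-master-credentials")
    creds = json.loads(secret["SecretString"])

    conn = psycopg2.connect(
        host=creds["host"],
        dbname=creds["dbname"],
        user=creds["username"],
        password=creds["password"],
    )
    conn.autocommit = True

    # The "step" field matches the input sent by the aws_lambda_invocation data source
    if event.get("step") == "create_app_user":
        with conn.cursor() as cur:
            # Application role name and password are placeholders
            cur.execute("CREATE ROLE app_user LOGIN PASSWORD %s", ("change-me",))

    conn.close()
    return {"status": "ok"}

The Lambda would need network access to the database (for example by running in the same VPC) and IAM permission to read the secret.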