I am trying to use AWS PowerShell with Eucalyptus.
I can do this with the AWS CLI using the --endpoint-url parameter.
Is it possible to set the endpoint URL in AWS PowerShell?
Can I create a custom region with my own endpoint URL in AWS PowerShell?
--UPDATE--
Newer versions of the AWS Tools for Windows PowerShell (I'm running 3.1.66.0, according to Get-AWSPowerShellVersion) have an optional -EndpointUrl parameter for the relevant commands.
Example:
Get-EC2Instance -EndpointUrl https://somehostnamehere
Additionally, the bug mentioned in the original answer below has been fixed.
Good stuff!
--ORIGINAL ANSWER--
TL;DR
Download the default endpoint config file from here: https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json
Customize it. Example:
{
  "version": 2,
  "endpoints": {
    "*/*": {
      "endpoint": "your_endpoint_here"
    }
  }
}
After importing the AWSPowerShell module, tell the SDK to use your customized endpoint config. Example:
[Amazon.AWSConfigs]::EndpointDefinition = "path to your customized Amazon.endpoints.json here"
Note: there is a bug in the underlying SDK that prevents endpoints that have a path component from being signed correctly. The bug affects this solution and the solution @HyperAnthony proposed.
Additional Info
Reading through the .NET SDK docs, I stumbled across a section that revealed that one can globally set the region rules given a file: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-other.html#config-setting-awsendpointdefinition
Unfortunately, I couldn't find the format of such a file documented anywhere.
I then spelunked through the AWSSDK.Core.dll code and found where the SDK loads the file (see the LoadEndpointDefinitions() method at https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/RegionEndpoint.cs).
Reading through the code, if a file isn't explicitly specified via AWSConfigs.EndpointDefinition, the SDK ultimately loads the file from an embedded resource (i.e. https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json).
I don't believe that it is. This list of common parameters (which can be used with all AWS PowerShell cmdlets) does not include a service URL; it seems instead to opt for a simple Region string that sets the service URL based on a set of known regions.
This AWS .NET Development forum post suggests that you can set the Service URL on a .NET SDK config object, if you're interested in a possible alternative in PowerShell. Here's an example usage from that thread:
$config = New-Object Amazon.EC2.AmazonEC2Config
$config.ServiceURL = "https://ec2.us-west-1.amazonaws.com"
$client = [Amazon.AWSClientFactory]::CreateAmazonEC2Client($accessKeyID, $secretKeyID, $config)
It looks like you can use it with most config objects when setting up a client. Here are some examples that have the ServiceURL property; I would imagine that this is on most, if not all, AWS config objects:
AmazonEC2Config
AmazonS3Config
AmazonRDSConfig
Older versions of the documentation (for v1) noted that this property would be ignored if the RegionEndpoint was set. I'm not sure if this is still the case with v2.
This seems straightforward to do: pass the service account key file (generated from the GCP console) by specifying the file location in the application.properties file. However, I tried all the following options:
1. spring.cloud.gcp.credentials.location=file:/home/my_user_id/mp6key.json
2. spring.cloud.gcp.credentials.location=file:src/main/resources/mp6key.json
3. spring.cloud.gcp.credentials.location=file:./main/resources/mp6key.json
4. spring.cloud.gcp.credentials.location=file:/src/main/resources/mp6key.json
It all ended up with the same error:
java.io.FileNotFoundException: /home/my_user_id/mp6key.json (No such file or directory)
Could anyone advise where I should put the key file and how I should specify the path to the file properly?
The same programs run successfully in Eclipse, with messages published and subscribed using GCP Pub/Sub (using the project ID/service account key generated in GCP), but they are now stuck with the above issue after being deployed to run on GCP.
As mentioned in the official documentation, the credentials file can be loaded from a number of different locations, such as the file system, classpath, URL, etc.
For example, if the service account key file is stored on the classpath as src/main/resources/key.json, pass the following property:
spring.cloud.gcp.credentials.location=classpath:key.json
If the key file is stored somewhere else in your local file system, use the file prefix in the property value:
spring.cloud.gcp.credentials.location=file:<path to key file>
My line looks like this:
spring.cloud.gcp.credentials.location=file:src/main/resources/[my_json_file]
And this works.
The following also works if I put it in the root of the project directory:
spring.cloud.gcp.credentials.location=file:./[my_json_file]
Have you tried following this quickstart? Please follow it carefully and explain any error you get while finishing it.
Anyway, before running your Java application, try running the following in the console (please modify it with the exact path where you store your key):
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/mp6key.json"
How are you authenticating your credentials in your Java application?
My answer is easy: if you run your code on GCP, you don't have to use a service account key file. Problem eliminated, problem solved!
More seriously, have a look at service identity. I don't know what your current service is (Compute Engine? Cloud Functions? Cloud Run?). In any case, you can attach a service account to any GCP component. Then, in your code, simply use the default credentials: the component's identity is loaded automatically. No key to manage, no key to store securely, no key to rotate!
If you provide more detail on your target platform, I can provide you some guidance to achieve this.
Keep in mind that service account key files are designed to be used by automated apps (without a user account involved) hosted outside GCP (on-prem, another cloud provider, a CI/CD system, Apigee, ...).
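To make this concrete, here is a minimal sketch of relying on default credentials, written in Python for brevity (the Java and Spring clients behave the same way); the google-cloud-pubsub package and the project/topic names are assumptions:
from google.cloud import pubsub_v1

# No key file anywhere: the client library discovers the attached service
# account (on GCP) or your `gcloud auth application-default login` (locally).
publisher = pubsub_v1.PublisherClient()

# Hypothetical project and topic names, for illustration only.
topic_path = publisher.topic_path("my-project-id", "my-topic")
future = publisher.publish(topic_path, b"hello")
print(future.result())  # prints the message ID once the publish succeeds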
UPDATE
When you use your personal account, you can also use the default credential.
Install gcloud SDK on your computer
Use the command gcloud auth application-default login
Follow the instructions
Enjoy!
If it doesn't work, get the <path> displayed after the login command and set this value in the environment variable named GOOGLE_APPLICATION_CREDENTIALS.
If you definitely want to use a service account key file (which is a security issue, for the reasons above), you can use it locally:
Either set the JSON key file path in the environment variable GOOGLE_APPLICATION_CREDENTIALS
Or run this command: gcloud auth activate-service-account --key-file=<path to your json key file>
Provided your file is in the resources folder, try
file://mp6key.json
Using file:// instead of file:/ works, for me at least.
I'm working on automating access to some APIs through a visual interface, and thus would like to present the user with a user-friendly way to call Amazon AWS APIs.
However, the documentation uses human-readable names, while the API must be called using more compact tokens.
I'd like to have a list of all the services, ideally:
ServiceID, Service name, Action Friendly Name, Action/Operation name, command line name
e.g. looking into the CloudFront ListDistributions operation, we can see that:
the service is called "CloudFront" but the API endpoint is spelled lowercase "cloudfront"
the API requires calling GET /<version>/distribution (see https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListDistributions.html)
the command line requires the "list-distributions" form: https://docs.aws.amazon.com/cli/latest/reference/cloudfront/list-distributions.html
A similar thing happens with "ListPublicKeys": https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListPublicKeys.html
Thus a table like this would help:
ServiceID, Service name, Action Friendly Name, Action/Operation name, command line name
cloudfront, CloudFront, ListDistributions, distribution, list-distributions
cloudfront, CloudFront, ListPublicKeys, public-key, list-public-keys
The link posted by @John Rotenstein in the comments resolves the issue.
The data files from botocore contain enough information to build the table mentioned above.
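For example, a minimal sketch that derives each column from those data files; it assumes the botocore package is installed, and uses botocore's xform_name helper (the same one the AWS CLI uses to derive its command names):
import botocore.session
from botocore import xform_name

session = botocore.session.get_session()

for service in session.get_available_services():
    model = session.get_service_model(service)
    for op in model.operation_names:
        # e.g. cloudfront, CloudFront, ListDistributions, list-distributions
        print(service, model.metadata.get('serviceId'), op, xform_name(op, '-'))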
I am using AmazonS3ClientBuilder.defaultClient() to build an AmazonS3Client in a Java application.
When I run this, it gives the following error:
Caused by: com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
I want to supply the region via a Java properties file. I referred to many posts but didn't find what I am looking for.
My question is: what is the variable name I should use to specify the region in a Java properties file?
I am not looking for an environment variable or credentials file. Can anyone help me with this?
Try creating your client using the builder pattern:
AmazonS3ClientBuilder.standard().withRegion("YOUR_REGION_STRING_HERE").build();
When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
rds → a boto3 RDS client
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
As I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the solution proposed by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-printable characters when using the boto-based AWS CLI; this issue is not present in the older Java-based CLI.
There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team is aware of this issue and working on a way to resolve it; however, they do not have an ETA for when the fix will be released.
So the solution is: use the Java-based CLI.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
InvalidParameterValue in boto means the data passed does not comply with the API's requirements. It is probably an invalid name that you specified: possibly something wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements:
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact name of a file that exists inside the RDS instance.
Please make sure the log file EXISTS inside the instance. Use this AWS CLI command as a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Do check Marker (string) and NumberOfLines (integer) as well, for mismatched types or out-of-range values. Since they are not required, you can skip them first and test them later.
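For reference, here is a minimal sketch of paging through a log file with Marker and AdditionalDataPending; the instance and log file names are hypothetical:
import boto3

rds = boto3.client('rds')

def download_full_log(instance_id, log_file):
    marker = '0'  # '0' starts from the beginning of the file
    while True:
        resp = rds.download_db_log_file_portion(
            DBInstanceIdentifier=instance_id,
            LogFileName=log_file,
            Marker=marker,
            NumberOfLines=1000,
        )
        yield resp.get('LogFileData', '')
        if not resp.get('AdditionalDataPending'):
            break
        marker = resp['Marker']

# Hypothetical names, for illustration only.
for chunk in download_full_log('my-rds-name', 'error/postgresql.log'):
    print(chunk, end='')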
I have set up my PC with Python and connections to AWS. This was successfully tested using the s3_sample.py file; I had to create an IAM user account with the credentials in a file, which worked fine for S3 buckets.
My next task was to create an MQTT bridge and put some data into a Kinesis stream using the AWS Labs awslabs/mqtt-kinesis-bridge.
This all seems to be OK, except that I get an error when I run bridge.py. The error is:
Could not find ACTIVE stream:my_first_stream error:Stream my_first_stream under account 673480824415 not found.
Strangely, this is not the account I use in the .boto file that is suggested to be set up for this bridge, which contains the same credentials I used for the S3 bucket:
[Credentials]
aws_access_key_id = AA1122BB
aws_secret_access_key = LlcKb61LTglis
It would seem to me that bridge.py has a hardcoded account, but I cannot see it, and I can't see where it points to the .boto file for credentials.
Thanks in Advance
So the issue of not finding the Active stream for the account is resolved by:
ensure you are connected to the us-east-1 region, as this is the default region for bridge.py
create your stream; you will only need one shard (see the sketch below)
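A minimal sketch of those two steps; it assumes boto3 (bridge.py itself uses the legacy boto package, but the stream only needs to exist and be ACTIVE):
import boto3

# us-east-1 is bridge.py's default region, so create the stream there.
kinesis = boto3.client('kinesis', region_name='us-east-1')
kinesis.create_stream(StreamName='my_first_stream', ShardCount=1)

# Block until the stream reaches ACTIVE before starting bridge.py.
kinesis.get_waiter('stream_exists').wait(StreamName='my_first_stream')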
The next problem stems from the specific version of MQTT and the Python library paho-mqtt I installed. The bridge application was written with the API of MQTT 1.2.1, using paho-mqtt 0.4.91, in mind.
The new version, which is available for download on their website, has a different way of interacting with the paho-mqtt library: it passes an additional "flags" object to the on_connect callback. This generates the error I was experiencing, since the callback is not expecting the fifth argument.
You should be able to fix it by making the following change to bridge.py
Line 104 currently looks like this:
def on_connect(self, mqttc, userdata, msg):
Simply add flags after userdata, so that the callback function looks like this:
def on_connect(self, mqttc, userdata, flags, msg):
This should resolve the final error, about an incorrect number of arguments being passed.
Hope this helps others; thanks for the efforts.
When you call the Python SDK for an AWS service, there is a line in bridge.py that imports the boto module:
import boto
The credentials setting points to the .boto file by default, as defined in boto.
Here is the explanation, from Boto Config:
Details
A boto config file is a text file formatted like an .ini configuration file that specifies values for options that control the behavior of the boto library. In Unix/Linux systems, on startup, the boto library looks for configuration files in the following locations and in the following order:
/etc/boto.cfg - for site-wide settings that all users on this machine will use
~/.boto - for user-specific settings
~/.aws/credentials - for credentials shared between SDKs
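As a quick sanity check for the account mismatch above, a minimal sketch that prints which credentials boto actually picked up from those locations:
import boto

# boto parses /etc/boto.cfg and ~/.boto automatically at import time;
# the merged result is exposed as boto.config.
access_key = boto.config.get('Credentials', 'aws_access_key_id')
print('bridge.py will authenticate as:', access_key)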
Of course, you can also set the environment variables directly:
export AWS_ACCESS_KEY_ID="Your AWS Access Key ID"
export AWS_SECRET_ACCESS_KEY="Your AWS Secret Access Key"