How to add the content of a script using Terraform - amazon-web-services

I'm trying to use the aws_sagemaker_studio_lifecycle_config resource. As per the documentation:
studio_lifecycle_config_content - (Required) The content of your Studio Lifecycle Configuration script. This content must be base64 encoded.
Now my question is: how do I add the content of the script from a file location?
The script is in bash.

So instead of base64encode, use filebase64.
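A minimal sketch of what that could look like (the resource name, config name, and script filename here are assumptions, not from the original question):

```hcl
resource "aws_sagemaker_studio_lifecycle_config" "example" {
  studio_lifecycle_config_name     = "example-config" # hypothetical name
  studio_lifecycle_config_app_type = "JupyterServer"

  # filebase64 reads the file at the given path and returns its contents
  # base64-encoded, so no separate base64encode(file(...)) call is needed.
  studio_lifecycle_config_content = filebase64("${path.module}/on-start.sh")
}
```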

Related

How to copy environment variables from one AWS lambda to another in Visual Studio

I want to mimic environment variables that already exist in one AWS Lambda in a new AWS Lambda I am creating in the Visual Studio IDE.
I did not find a way to export/import environment variables, so I was wondering how to minimize the manual effort of recreating each environment variable by hand.
As I did not find any export/import option (neither in the AWS portal nor via the AWS Explorer in Visual Studio), I resorted to Notepad++ copy/paste/regex-ing to populate the aws-lambda-tools-defaults.json that can be configured in Visual Studio (which will publish the lambda with the environment variables as set in that .json file).
Steps
From the AWS website, select & copy the environment variables into a new Notepad++ file.
Then perform the following find-&-replace operations in that file:
S.No. | Search mode | Find what | Replace with
------|-------------|-----------|-------------
2.1   | Regex       | [ \t]+    | (nothing)
2.2   | Regex       | ^         | \\"
2.3   | Regex       | $         | \\";
Select all contents and join lines (Ctrl+J shortcut in Notepad++)
Remove the whitespace between entries introduced by the previous step (replace "; " with ";")
The single-line output that gets produced can now be pasted at the placeholder in the aws json file:
"environment-variables" : "<place the outcome here>",
(note: the double quotes surrounding the outcome text should remain there)
Now when you save the aws json file and publish the lambda from the Visual Studio IDE, you will see the environment variables prefilled
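For what it's worth, the same find-&-replace steps can be scripted with sed and tr, assuming the copied text has one KEY = VALUE pair per line (that input format is an assumption; adjust the first substitution to match whatever the console copy actually produces):

```shell
# Reproduce the Notepad++ steps: strip whitespace, wrap each line as \"...\";,
# then join all lines into one. 'FOO = bar' etc. stand in for the copied variables.
printf 'FOO = bar\nBAZ = qux\n' \
  | sed -E 's/[[:blank:]]+//g; s/^/\\"/; s/$/\\";/' \
  | tr -d '\n'
# produces: \"FOO=bar\";\"BAZ=qux\";
```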

Is there a way to copy Google Cloud Storage object from SDK Shell to network drive like Box?

Is there a way to copy a GCS object via the SDK Shell to a network drive like Box?
What I've tried is below. Thanks.
gsutil cp gs://your-bucket/some_file.tif C:/Users/Box/01. name/folder
CommandException: Destination URL must name a directory, bucket, or bucket
subdirectory for the multiple source form of the cp command.
There appears to be a typo in your destination:
C:/Users/Box/01. name/folder
There is a space after the period and before 'name'; you'll need to either wrap the path in quotes or escape that space, e.g.:
gsutil cp gs://your-bucket/some_file.tif "C:/Users/Box/01. name/folder"
It looks like you're on Windows; here's a doc on how to escape spaces in file paths.

AWS Athena/Glue: Add partition to the data catalog using AWS CLI

I want to manually add a partition to the data catalog.
I can do it using the Athena editor, but I want to do it with a shell script that I can parameterize and schedule.
Here's an example of my sql command to add a partition:
ALTER TABLE tab ADD PARTITION (year='2020', month='10', day='28', hour='23') LOCATION 's3://bucket/data/2020/10/28/23/'
I want to do the same thing with a shell command.
I think I can use the Glue API: create-partition
Doc: https://docs.aws.amazon.com/cli/latest/reference/glue/create-partition.html
I'm trying, but there's something wrong with the format of the parameter that I pass.
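The usual stumbling block is that --partition-input expects a JSON structure, not the Hive-style key=value syntax from the ALTER TABLE statement. A hedged sketch of the equivalent CLI call (the database name mydb is a placeholder; the table and bucket come from the ALTER TABLE example above; note you will likely also need to copy InputFormat, OutputFormat and SerdeInfo from the table's StorageDescriptor for Athena to read the partition):

```shell
# Parameterizable partition values (these could come from `date` in a scheduled job).
YEAR=2020; MONTH=10; DAY=28; HOUR=23
LOCATION="s3://bucket/data/${YEAR}/${MONTH}/${DAY}/${HOUR}/"

# --partition-input is JSON; Values must be in the same order as the
# table's partition keys (year, month, day, hour).
aws glue create-partition \
  --database-name mydb \
  --table-name tab \
  --partition-input "{
    \"Values\": [\"${YEAR}\", \"${MONTH}\", \"${DAY}\", \"${HOUR}\"],
    \"StorageDescriptor\": {\"Location\": \"${LOCATION}\"}
  }"
```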

How can I provision IIS on EC2 Windows with a resource?

I have just started working on a project that is hosted on an AWS EC2 Windows instance with IIS. I want to move this setup to a more reliable place, and one of the first things I wanted to do was to move away from snowflake servers that are set up and configured by hand.
So I started looking at Terraform from HashiCorp. My thought was that I could define the entire setup, including the network etc., in Terraform and that way make sure it was configured correctly.
I thought I would start with defining a server. A simple Windows Server instance with an IIS installed. But this is where I run into my first problems. I thought I could configure the IIS from Terraform. I guess you can't. So my next thought was to combine Terraform with Powershell Desired State Configuration.
I can setup an IIS server on a box using DSC. But I am stuck invoking DSC from Terraform. I can provision a vanilla server easily. I have tried looking for a good blog post on how to use DSC in combination with Terraform, but I can't find one that explains how to do it.
Can anyone point me towards a good place to read up on this? Or alternatively if the reason I can't find this is that it is just bad practice and I should do it in another way, then please educate me.
Thanks
How can I provision IIS on EC2 Windows with a resource?
You can run arbitrary PowerShell scripts on startup as follows:
resource "aws_instance" "windows_2016_server" {
  //...
  user_data = <<-EOF
    <powershell>
    $file = $env:SystemRoot + "\Temp\${var.some_variable}" + (Get-Date).ToString("MM-dd-yy-hh-mm")
    New-Item $file -ItemType file
    </powershell>
  EOF
  //...
}
You'll need a variable like this defined to use that (I'm providing a slightly more complex example so there's a more useful starting point):
variable "some_variable" {
  type    = string
  default = "UserDataTestFile"
}
Instead of creating a timestamp file like the example above, you can invoke DSC to set up IIS as you normally would interactively from PowerShell on a server.
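For example, a user-data script along these lines would install IIS directly (a minimal sketch using the built-in Install-WindowsFeature cmdlet rather than a full DSC configuration; a DSC push could be invoked from the same place):

```powershell
<powershell>
# Install IIS and its management tools; this runs at first boot as user-data.
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
</powershell>
```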
You can read more about user_data on Windows here:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html
user_data will include your PowerShell directly.
You can use templatefile("${path.module}/user-data.ps1", {some_variable = var.some_variable}) instead of an inline script as above.
Have user-data.ps1 in the same directory as the TF file that references it:
<powershell>
$file = $env:SystemRoot + "\Temp\${some_variable}" + (Get-Date).ToString("MM-dd-yy-hh-mm")
New-Item $file -ItemType file
</powershell>
You still need the <powershell></powershell> tags around your script source code. That's a requirement of how Windows on EC2 expects PowerShell user-data scripts.
And then update your TF file as follows:
resource "aws_instance" "windows_2016_server" {
  //...
  user_data = templatefile("${path.module}/user-data.ps1", {
    some_variable = var.some_variable
  })
  //...
}
Note that the file read by templatefile references variables as ${some_variable}, NOT ${var.some_variable}.
Read more about templatefile here:
https://www.terraform.io/docs/configuration/functions/templatefile.html

Custom endpoint in AWS powershell

I am trying to use AWS PowerShell with Eucalyptus.
I can do this with the AWS CLI via the --endpoint-url parameter.
Is it possible to set the endpoint URL in AWS PowerShell?
Can I create a custom region with my own endpoint URL in AWS PowerShell?
--UPDATE--
The newer versions of the AWS Tools for Windows PowerShell (I'm running 3.1.66.0 according to Get-AWSPowerShellVersion) have an optional -EndpointUrl parameter for the relevant commands.
Example:
Get-EC2Instance -EndpointUrl https://somehostnamehere
Additionally, the aforementioned bug has been fixed.
Good stuff!
--ORIGINAL ANSWER--
TL;DR
Download the default endpoint config file from here: https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json
Customize it. Example:
{
  "version": 2,
  "endpoints": {
    "*/*": {
      "endpoint": "your_endpoint_here"
    }
  }
}
After importing the AWSPowerShell module, tell the SDK to use your customized endpoint config. Example:
[Amazon.AWSConfigs]::EndpointDefinition = "path to your customized Amazon.endpoints.json here"
Note: there is a bug in the underlying SDK that prevents endpoints that have a path component from being signed correctly. The bug affects both this solution and the solution @HyperAnthony proposed.
Additional Info
Reading through the .NET SDK docs, I stumbled across a section that revealed that one can globally set the region rules given a file: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-other.html#config-setting-awsendpointdefinition
Unfortunately, I couldn't find anywhere where the format of such a file is documented.
I then spelunked through the AWSSDK.Core.dll code and found where the SDK loads the file (see the LoadEndpointDefinitions() method at https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/RegionEndpoint.cs).
Reading through the code, if a file isn't explicitly specified on AWSConfigs.EndpointDefinition, it ultimately loads the file from an embedded resource (i.e. https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json)
I don't believe that it is. This list of common parameters (which can be used with all AWS PowerShell cmdlets) does not include a Service URL; it seems instead to opt for a simple string Region that sets the Service URL based on a set of known regions.
This AWS .NET Development forum post suggests that you can set the Service URL on a .NET SDK config object, if you're interested in a possible alternative in PowerShell. Here's an example usage from that thread:
$config=New-Object Amazon.EC2.AmazonEC2Config
$config.ServiceURL = "https://ec2.us-west-1.amazonaws.com"
$client=[Amazon.AWSClientFactory]::CreateAmazonEC2Client($accessKeyID,$secretKeyID,$config)
It looks like you can use it with most config objects when setting up a client. Here are some examples that have the ServiceURL property. I would imagine that this is on most all AWS config objects:
AmazonEC2Config
AmazonS3Config
AmazonRDSConfig
Older versions of the documentation (for v1) noted that this property will be ignored if the RegionEndpoint is set. I'm not sure if this is still the case with v2.