I'm a novice to the Google Cloud Compute API in Node.
I'm using this library:
https://googleapis.dev/nodejs/compute/latest/index.html
I'm authenticated and can make API requests; that part is all set up.
All I'm trying to do is make a startup script that downloads from this URL
http://eve-robotics.com/release/EveAIO_setup.exe and places the file on the desktop.
I have this, but I'm sure it's way off based on some articles and docs I'm seeing, since I know nothing about bash or startup scripts.
This is what I have:
const Compute = require('@google-cloud/compute');

const compute = new Compute();
const zone = compute.zone('us-central1-c');

async function createVM() {
  const vmName = 'start-script-trial3';
  const config = {
    os: 'windows',
    http: true,
    metadata: {
      items: [
        {
          key: 'startup-script',
          value: `curl http://eve-robotics.com/release/EveAIO_setup.exe --output Eve`,
        },
      ],
    },
  };
  const vm = zone.vm(vmName);
  const [, operation] = await vm.create(config);
  console.log(operation.id);
}

createVM();
I was able to do it in bash:
I made a .bat script for Windows:
@ECHO OFF
curl http://eve-robotics.com/release/EveAIO_setup.exe --output C:\Users\Eve
I copied the script to GCS:
gsutil cp file.bat gs://my-bucket/
Then I run the gcloud command:
gcloud compute instances create example-windows-instance --scopes storage-ro --image-family=windows-1803-core --image-project=windows-cloud --metadata windows-startup-script-url=gs://my-bucket/file.bat --zone=europe-west1-c
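For completeness, the same windows-startup-script-url metadata can be set from the Node client. This is a minimal sketch, untested, assuming the bucket path from the gcloud command above; note that Windows images read the windows-startup-script-* keys rather than the Linux startup-script key:

const Compute = require('@google-cloud/compute');

const compute = new Compute();
const zone = compute.zone('us-central1-c');

async function createWindowsVM() {
  const config = {
    os: 'windows',
    http: true,
    metadata: {
      items: [
        {
          // Windows images ignore 'startup-script' (Linux-only);
          // they look for windows-startup-script-cmd/-ps1/-url.
          key: 'windows-startup-script-url',
          value: 'gs://my-bucket/file.bat',
        },
      ],
    },
  };
  const [vm, operation] = await zone.createVM('start-script-trial3', config);
  await operation.promise(); // wait for the create operation to finish
  console.log(`${vm.name} created`);
}

createWindowsVM().catch(console.error);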
Related
Why doesn't my UserData script run when I create an Amazon Lightsail instance?
var shuju = new CreateInstancesRequest()
{
    BlueprintId = "centos_7_1901_01",
    BundleId = "micro_2_0",
    AvailabilityZone = "ap-northeast-1d",
    InstanceNames = new System.Collections.Generic.List<string>() { "test" },
    UserData = "echo root:test123456- |sudo chpasswd root\r\nsudo sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config;\r\nsudo sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config;\r\nsudo reboot\r\n"
};
If you wish to run a User Data script on a Linux instance, then the first line must begin with #!.
It uses the same technique as an Amazon EC2 instance, so see: Running Commands on Your Linux Instance at Launch - Amazon Elastic Compute Cloud
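For illustration, a minimal Linux UserData script that satisfies the #! requirement (the file path here is just an example):

#!/bin/bash
# Without #! on the first line, Lightsail (like EC2) will not execute the script.
echo 'user data ran' > /tmp/userdata-ran.txt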
I have a PowerShell Lambda that I would like to deploy via the AWS CDK, but I'm having issues getting it to run.
Deploying the PowerShell via a manual Publish-AWSPowerShellLambda works:
Publish-AWSPowerShellLambda -ScriptPath .\PowershellLambda.ps1 -Name PowershellLambda
However, the same script deployed with the CDK doesn't log to CloudWatch Logs, even though it has permission:
import events = require('@aws-cdk/aws-events');
import targets = require('@aws-cdk/aws-events-targets');
import lambda = require('@aws-cdk/aws-lambda');
import cdk = require('@aws-cdk/core');

export class LambdaCronStack extends cdk.Stack {
  constructor(app: cdk.App, id: string) {
    super(app, id);

    const lambdaFn = new lambda.Function(this, 'Singleton', {
      code: new lambda.AssetCode('./PowershellLambda/PowershellLambda.zip'),
      handler: 'PowershellLambda::PowershellLambda.Bootstrap::ExecuteFunction',
      timeout: cdk.Duration.seconds(300),
      runtime: lambda.Runtime.DOTNET_CORE_2_1
    });

    const rule = new events.Rule(this, 'Rule', {
      schedule: events.Schedule.expression('rate(1 minute)')
    });

    rule.addTarget(new targets.LambdaFunction(lambdaFn));
  }
}

const app = new cdk.App();
new LambdaCronStack(app, 'LambdaCronExample');
app.synth();
The PowerShell script currently contains just the following lines and works when deployed via Publish-AWSPowerShellLambda on the CLI:
#Requires -Modules @{ModuleName='AWSPowerShell.NetCore';ModuleVersion='3.3.335.0'}
Write-Host "Powershell Lambda Executed"
Note: For the CDK deployment I generate the .zip file using a build step in package.json:
"scripts": {
  "build": "tsc",
  "build-package": "pwsh -NoProfile -ExecutionPolicy Unrestricted -command New-AWSPowerShellLambdaPackage -ScriptPath './PowershellLambda/PowershellLambda.ps1' -OutputPackage ./PowershellLambda/PowershellLambda.zip",
  "watch": "tsc -w",
  "cdk": "cdk"
}
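For reference, with those scripts the package is rebuilt and deployed in this assumed order (the exact invocation may differ in your setup):

npm run build
npm run build-package
npm run cdk -- deploy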
The CDK deploys fine and the Lambda runs as expected, but the only thing in CloudWatch Logs is this:
START RequestId: 4c12fe1a-a9e0-4137-90cf-747b6aecb639 Version: $LATEST
I've checked that the handler in the CDK script matches the output of Publish-AWSPowerShellLambda, and that the zip file uploaded fine and contains the correct code.
Any suggestions as to why this isn't working?
Setting the memory size to 512 MB within the lambda.Function resolved the issue.
The CloudWatch entry showed the Lambda starting, but it appears there wasn't enough memory to initialize and run the .NET runtime.
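In CDK terms that is one extra property on the lambda.Function from the question (a sketch of just that function, not the full stack):

const lambdaFn = new lambda.Function(this, 'Singleton', {
  code: new lambda.AssetCode('./PowershellLambda/PowershellLambda.zip'),
  handler: 'PowershellLambda::PowershellLambda.Bootstrap::ExecuteFunction',
  timeout: cdk.Duration.seconds(300),
  memorySize: 512, // the 128 MB default is too small to bootstrap the .NET runtime
  runtime: lambda.Runtime.DOTNET_CORE_2_1
});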
I have a machine with NixOS (provisioned using Terraform; config), and I want to connect to it using deployment.targetHost = ipAddress and deployment.targetEnv = "none".
But I can't configure nixops to use the /secrets/stage_ssh_key SSH key.
This is not working (it's actually not documented; I found it here: https://github.com/NixOS/nixops/blob/d4e5b779def1fc9e7cf124930d0148e6bd670051/nixops/backends/none.py#L33-L35):
{
  stage =
    { pkgs, ... }:
    {
      deployment.targetHost = (import ./nixos-generated/stage.nix).terraform.ip;
      deployment.targetEnv = "none";
      deployment.none.sshPrivateKey = builtins.readFile ./secrets/stage_ssh_key;
      deployment.none.sshPublicKey = builtins.readFile ./secrets/stage_ssh_key.pub;
      deployment.none.sshPublicKeyDeployed = true;

      environment.systemPackages = with pkgs; [
        file
      ];
    };
}
nixops ssh stage asks for a password; expected: login without a password.
nixops ssh stage -i ./secrets/stage_ssh_key works as expected; no password is asked.
How to reproduce:
download the repo
rm -rf secrets/*
add AWS keys in secrets/aws.nix:
{
  EC2_ACCESS_KEY = "XXXX";
  EC2_SECRET_KEY = "XXXX";
}
nix-shell
make generate_stage_ssh_key
terraform apply
make nixops_create
nixops deploy asks for a password
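One possible workaround (an assumption on my part, based on nixops shelling out to ssh, not on anything documented) is to load the key into ssh-agent before deploying, so the underlying ssh calls can pick it up:

eval "$(ssh-agent)"              # start an agent for this shell
ssh-add ./secrets/stage_ssh_key  # load the generated key
nixops deploy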
I've created an RDS Postgres instance with an initial size of 65 GB.
Is it possible to get the free space available using a Postgres query?
If not, how can I achieve the same?
Thanks in advance.
A couple of ways to do it:
Using the AWS Console
Go to the RDS console and select the region your database is in. Click on the Show Monitoring button and pick your database instance. There will be a graph that shows Free Storage Space.
This is documented in the AWS RDS documentation.
Using the API via AWS CLI
Alternatively, you can use the AWS API to get the information from CloudWatch.
I will show how to do this with the AWS CLI.
This assumes you have set up your AWS CLI credentials. I export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables, but there are multiple ways to configure the CLI (or the SDKs).
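For example (placeholder values, not real keys):

export AWS_ACCESS_KEY_ID='AKIA...'   # your access key id
export AWS_SECRET_ACCESS_KEY='...'   # your secret access key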
REGION="eu-west-1"
START="$(date -u -d '5 minutes ago' '+%Y-%m-%dT%T')"
END="$(date -u '+%Y-%m-%dT%T')"
INSTANCE_NAME="tstirldbopgs001"
AWS_DEFAULT_REGION="$REGION" aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS --metric-name FreeStorageSpace \
  --start-time $START --end-time $END --period 300 \
  --statistics Average \
  --dimensions "Name=DBInstanceIdentifier,Value=${INSTANCE_NAME}"
{
  "Label": "FreeStorageSpace",
  "Datapoints": [
    {
      "Timestamp": "2017-11-16T14:01:00Z",
      "Average": 95406264320.0,
      "Unit": "Bytes"
    }
  ]
}
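If you have jq available, you can convert the returned bytes to gigabytes in the same pipeline (assuming a single datapoint, as above):

AWS_DEFAULT_REGION="$REGION" aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS --metric-name FreeStorageSpace \
  --start-time $START --end-time $END --period 300 \
  --statistics Average \
  --dimensions "Name=DBInstanceIdentifier,Value=${INSTANCE_NAME}" \
  | jq '.Datapoints[0].Average / (1024 * 1024 * 1024)'   # prints free space in GB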
Using the API via Java SDK
Here's a rudimentary example of how to get the same data via the Java AWS SDK, using the CloudWatch API.
build.gradle contents
apply plugin: 'java'
apply plugin: 'application'

sourceCompatibility = 1.8

repositories {
    jcenter()
}

dependencies {
    compile 'com.amazonaws:aws-java-sdk-cloudwatch:1.11.232'
}

mainClassName = 'GetRDSInfo'
Java class
Again, I rely on the credential chain to get AWS API credentials (I set them in my environment). You can change the call to the builder to change this behavior (see the Working with AWS Credentials documentation).
import java.util.Calendar;
import java.util.Date;

import com.amazonaws.regions.Regions;
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsRequest;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsResult;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.Datapoint;

public class GetRDSInfo {
    public static void main(String[] args) {
        final long GIGABYTE = 1024L * 1024L * 1024L;

        // Calculate our endTime as now and startTime as 5 minutes ago.
        Calendar cal = Calendar.getInstance();
        Date endTime = cal.getTime();
        cal.add(Calendar.MINUTE, -5);
        Date startTime = cal.getTime();

        String dbIdentifier = "tstirldbopgs001";
        Regions region = Regions.EU_WEST_1;

        Dimension dim = new Dimension()
            .withName("DBInstanceIdentifier")
            .withValue(dbIdentifier);

        final AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.standard()
            .withRegion(region)
            .build();

        GetMetricStatisticsRequest req = new GetMetricStatisticsRequest()
            .withNamespace("AWS/RDS")
            .withMetricName("FreeStorageSpace")
            .withStatistics("Average")
            .withStartTime(startTime)
            .withEndTime(endTime)
            .withDimensions(dim)
            .withPeriod(300);

        GetMetricStatisticsResult res = cw.getMetricStatistics(req);

        for (Datapoint dp : res.getDatapoints()) {
            // We requested only the average free space over the last 5 minutes,
            // so we have only one datapoint.
            double freespaceGigs = dp.getAverage() / GIGABYTE;
            System.out.println(String.format("Free Space: %.2f GB", freespaceGigs));
        }
    }
}
Example Java Code Execution
> gradle run
> Task :run
Free Space: 88.85 GB
BUILD SUCCESSFUL in 7s
The method using the AWS Management Console has changed.
Now you have to go to:
RDS > Databases > [your_db_instance]
From there, scroll down and click on "Monitoring".
There you should be able to see your database's "Free Storage Space" graph.
My problem
I have successfully deployed a Nomad job with a few dozen Redis Docker containers on AWS, using the default Redis image from Docker Hub.
I've slightly altered the default config file created by nomad init to change the number of running containers, and everything works as expected.
The problem is that the actual image I would like to run is in ECR, which requires AWS permissions (access and secret key), and I don't know how to send these.
Code
job "example" {
datacenters = ["dc1"]
type = "service"
update {
max_parallel = 1
min_healthy_time = "10s"
healthy_deadline = "3m"
auto_revert = false
canary = 0
}
group "cache" {
count = 30
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
ephemeral_disk {
size = 300
}
task "redis" {
driver = "docker"
config {
# My problem here
image = "https://-whatever-.dkr.ecr.us-east-1.amazonaws.com/-whatever-"
port_map {
db = 6379
}
}
resources {
network {
mbits = 10
port "db" {}
}
}
service {
name = "global-redis-check"
tags = ["global", "cache"]
port = "db"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
}
}
}
What have I tried
Extensive Google searching
Reading the manual
Placing the AWS credentials on the machine that runs the Nomad file (using aws configure)
My question
How can Nomad be configured to pull Docker containers from AWS ECR using the AWS credentials?
Pretty late for you, but AWS ECR does not handle authentication in the way that Docker expects. You need to run sudo $(aws ecr get-login --no-include-email --region ${your region}); running the returned command actually authenticates in a Docker-compliant way.
Note that the region is optional if the AWS CLI is configured. Personally, I attach an IAM role to the box (allowing ECR pull/list/etc.) so that I don't have to deal with credentials manually.
I don't use ECR, but if it acts like a normal Docker registry, this is what I do for my registry, and it works. Assuming that holds, it should work fine for you as well:

config {
  image = "registry.service.consul:5000/MYDOCKERIMAGENAME:latest"
  auth {
    username = "MYMAGICUSER"
    password = "MYMAGICPASSWORD"
  }
}
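For ECR specifically, the registry expects the literal username AWS and the short-lived token that aws ecr get-login prints, so an auth block would look like the sketch below. Note this is only an illustration: the token expires after roughly 12 hours, and the image path is a placeholder carried over from the question.

config {
  image = "-whatever-.dkr.ecr.us-east-1.amazonaws.com/-whatever-:latest"
  auth {
    username = "AWS"
    password = "<token printed by aws ecr get-login>"
  }
}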