Getting files from server on AWS Using Jenkins Build - amazon-web-services

I have installed Jenkins on my local machine (on premises). My server (Linux) is in the AWS cloud. I need to share logs with developers without giving them access to the server. I want to create a Jenkins job so that, by running it, they can get the logs from the server.
How can I do that? If anyone follows a similar process to get data from the cloud, please help me solve this. Thanks in advance.

Use the SSH Agent plugin to securely set up your private key
Use SCP to copy the log files to the local workspace
Archive those files to the Jenkins job
You could write a pipeline script to do this. Something like:
node ("linux") {
sshagent (credentials: ['deploy-dev']) {
sh 'scp user#awshostnamehere:/somepath/somelogfile .'
archive somelogfile
}
}
Note that this requires you to fill in the blanks. To get this to work you would have to:
Set up an SSH private key credential named deploy-dev
Set up a build agent with the label 'linux', or change that to a label of an agent you do have.
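If the copy step fails, it can help to verify the key and path by hand from the build agent before wiring it into the pipeline. A minimal sanity check, assuming the same placeholder user, host and path as above, and assuming the key backing the deploy-dev credential lives at ~/.ssh/deploy-dev:
# Hypothetical sanity check run on the 'linux' agent; key path, user, host and file path are assumptions.
ssh -i ~/.ssh/deploy-dev user@awshostnamehere 'ls -l /somepath/somelogfile'
scp -i ~/.ssh/deploy-dev user@awshostnamehere:/somepath/somelogfile .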

Related

Cloudwatch agent not using environment variable credentials on Windows

I'm trying to configure an AMI using a script that installs the unified CloudWatch agent on both AWS and on-premises Windows machines, using static IAM credentials for both. As part of the script, I set the credentials statically (as a test) using
$Env:AWS_ACCESS_KEY_ID="myaccesskey"
$Env:AWS_SECRET_ACCESS_KEY="mysecretkey"
$Env:AWS_DEFAULT_REGION="us-east-1"
Once I have the AMI, I create a machine and connect to it, and then verify the credentials are there by running aws configure list
      Name                     Value             Type    Location
      ----                     -----             ----    --------
   profile                 <not set>             None    None
access_key      ****************C6IF              env
secret_key      ****************SCnC              env
    region                 us-east-1              env    ['AWS_REGION', 'AWS_DEFAULT_REGION']
But when I start the agent, I get the following error in the logs.
2022-12-26T17:51:49Z I! First time setting retention for log group test-cloudwatch-agent, update map to avoid setting twice
2022-12-26T17:51:49Z E! Failed to get credential from session: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
SharedCredsLoad: failed to load profile, .
EC2RoleRequestError: no EC2 instance role found
caused by: EC2MetadataError: failed to make EC2Metadata request
I'm using the Administrator user both for the installation of the agent and when RDPing into the machine. Is there anything I'm missing?
I've already tried adding the credentials to the .aws/credentials file and modifying the common-config.toml file to use a profile. That way it works, but in my case I just want to use the environment variables.
EDIT: I tested adding the credentials in the userdata script and modified a bit how they are created, and now it seems to work.
$env:aws_access_key_id = "myaccesskeyid"
$env:aws_secret_access_key = "mysecretaccesskey"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID',$env:aws_access_key_id,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY',$env:aws_secret_access_key,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION','us-east-1',[System.EnvironmentVariableTarget]::Machine)
Now the problem is that I'm trying to start the agent at the end of the userdata script with the command from the documentation, but it does nothing (I see the command in the agent logs, but there is no error). If I RDP into the machine and launch the same command in PowerShell, it works fine. The command is:
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m onPrem -s -c file:"C:\ProgramData\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent.json"
I finally was able to make it work, but I'm not sure why it didn't before. I was using
$env:aws_access_key_id = "accesskeyid"
$env:aws_secret_access_key = "secretkeyid"
[System.Environment]::SetEnvironmentVariable('AWS_ACCESS_KEY_ID',$env:aws_access_key_id,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_SECRET_ACCESS_KEY',$env:aws_secret_access_key,[System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('AWS_DEFAULT_REGION','us-east-1',[System.EnvironmentVariableTarget]::Machine)
to set the variables but then the agent was failing to initialize. I had to add
$env:aws_default_region = "us-east-1"
so it was able to run. I couldn't find the issue before because on Windows Server 2022 I don't get the logs from the execution; I had to try Windows Server 2019 to actually see the error when launching the agent.
I still don't know why the environment variables I set in the machine scope worked once logged into the machine but not when using them as part of the userdata script.

Google Cloud Platform: cloudshell - is there any way to "keep" gcloud init configs?

Does anyone know of a way to persist configurations done using "gcloud init" commands inside cloudshell, so they don't vanish each time you disconnect?
I figured out how to persist Python pip installs using the --user flag, for example:
pip install --user pandas
But, when I create a new configuration using gcloud init, use it for a bit, close cloudshell (or cloudshell times out on me), then reconnect later, the configurations are gone.
Not a big deal, I bounce between projects/etc so it's nice to have the configs saved so I can simply run
gcloud config configurations activate config-name
Thanks...Rich Murnane
Google Cloud Shell only persists data in your $HOME directory. Commands like gcloud init modify environment variables and store configuration files in /tmp, which is deleted when the VM is restarted. The VM is terminated after being idle for 20 or 60 minutes, depending on which document you read.
Google Cloud Shell is a Docker container. You can modify the Docker image to customize it to fit your needs. This method allows you to install packages, tools, etc. that are not located in your $HOME directory.
You can also store your files and configuration scripts on Google Cloud Storage. Modify .bashrc to download your cloud files and run your configuration script.
Either method will allow you to create a persistent environment.
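As a rough illustration of the Cloud Storage approach, a snippet like the following could go at the end of ~/.bashrc in Cloud Shell; the bucket and script names are placeholders, not anything Google provides:
# Hypothetical ~/.bashrc addition: fetch a saved setup script from your own bucket
# and run it at the start of every new Cloud Shell session.
if gsutil cp gs://my-config-bucket/restore-gcloud-config.sh "$HOME/" 2>/dev/null; then
  . "$HOME/restore-gcloud-config.sh"
fi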
This StackOverflow answer covers in detail what gcloud init does and how to basically emulate the same thing via script or command line.
gcloud init details
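For reference, a rough sketch of emulating gcloud init non-interactively with explicit gcloud config commands; the configuration name, project ID, region and zone are placeholders:
# Hypothetical restore script: recreate and activate a named configuration
# instead of re-running the interactive 'gcloud init'.
gcloud config configurations create config-name 2>/dev/null || true
gcloud config configurations activate config-name
gcloud config set project my-project-id
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a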
This isn't exactly what I wanted, but since my account (userid) isn't changing, I'm simply going to do the command
gcloud config set project second-project-name
good enough, thanks...Rich

passing aws creds to kitchen ec2 command line

I am trying to do Chef cookbook development via a Jenkinsfile pipeline. I have my Jenkins server running as a container (using the jenkinsci/blueocean image). In one of the stages, I am trying to do aws configure and then run kitchen test. With the code below, I am getting an unauthorized operation error; my AWS creds are not being passed properly to .kitchen.yml (no need to check the IAM creds, they have admin access).
stage('\u27A1 Verify Kitchen') {
    steps {
        sh '''mkdir -p ~/.aws/
              echo 'AWS_ACCESS_KEY_ID=...' >> ~/.aws/credentials
              echo 'AWS_SECRET_ACCESS_KEY=...' >> ~/.aws/credentials
              cat ~/.aws/credentials
              KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen list
              KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen test'''
    }
}
Is there any way I can pass the AWS creds here? Also, .kitchen.yml no longer supports passing AWS creds inside the file. Is there some way I can pass the creds on the command line, i.e. .kitchen.yml access_key=... secret_access_key=... /opt/chefdk/embedded/bin/kitchen test
Really appreciate your help.
You don't need to set KITCHEN_LOCAL_YAML=.kitchen.yml, that's already the primary config file.
You probably want to be using a Jenkins credentials file, not hardcoding things into the job. That said, the reason this isn't working is that the AWS credentials file is not a shell script, which is the syntax you're using there. It's an INI/TOML file and is paired with a similar config file with the same structure.
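For reference, a sketch of what the shared credentials file should look like, written from a shell step; the key values are elided just as in the question:
# Hypothetical sketch: the SDK expects INI sections with 'key = value' pairs,
# not shell-style KEY=VALUE lines appended to the file.
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
EOF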
You should probably just use the environment variable support in kitchen-ec2, via the withEnv pipeline helper method or similar ways of integrating with Jenkins-managed credentials.
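As a rough sketch of the environment variable route, assuming the values are injected by withEnv/withCredentials rather than hardcoded, the shell step could look something like this (the region is an assumption):
# Hypothetical sketch: kitchen-ec2 picks up the standard AWS environment variables,
# so no credentials file is needed inside the container.
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-1
/opt/chefdk/embedded/bin/kitchen test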

Dowloading a file from the packer instance

I am using Packer to create AMIs.
Packer creates a temporary security group, key pair, etc. and launches an instance. In my use case, after installing all the packages I need to run some tests, so the results are generated on the same instance that Packer launched.
I want those results. Basically, I want to download the results file to the machine that triggered the Packer build. Is there a way to do this in Packer itself, or any other way?
Thanks in advance for any help.
I actually did not get any help on this.
The (somewhat hacky) solution I found was:
I created a bash script that uploads the result.xml (test result) to S3.
Do not forget to give the correct permissions on the S3 bucket, so that the Packer instance can access it.
Then download the result file from S3 after the Packer run is completed.
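As a rough sketch of that workaround, with the bucket name and file path as placeholders and assuming the instance has permission to write to the bucket:
# On the instance, e.g. as the last step of a shell provisioner (hypothetical bucket and path):
aws s3 cp /tmp/result.xml s3://my-packer-results/result.xml

# On the machine that ran 'packer build', once the build has finished:
aws s3 cp s3://my-packer-results/result.xml .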
For anyone who runs into this today, you can use the direction option to switch to download:
https://www.packer.io/docs/provisioners/file#direction

Not able to Start/Stop Spark Worker from Remote Machine

I have two machines A and B. I am trying to run Spark Master on machine A and Spark Worker on machine B.
I have set machine B's host name in conf/slaves in my Spark directory.
When I execute start-all.sh to start the master and workers, I get the message below on the console:
abc#abc-vostro:~/spark-scala-2.10$ sudo sh bin/start-all.sh
sudo: /etc/sudoers.d is world writable
starting spark.deploy.master.Master, logging to /home/abc/spark-scala-2.10/bin/../logs/spark-root-spark.deploy.master.Master-1-abc-vostro.out
13/09/11 14:54:29 WARN spark.Utils: Your hostname, abc-vostro resolves to a loopback address: 127.0.1.1; using 1XY.1XY.Y.Y instead (on interface wlan2)
13/09/11 14:54:29 WARN spark.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Master IP: abc-vostro
cd /home/abc/spark-scala-2.10/bin/.. ; /home/abc/spark-scala-2.10/bin/start-slave.sh 1 spark://abc-vostro:7077
xyz@1XX.1XX.X.X's password:
xyz@1XX.1XX.X.X: bash: line 0: cd: /home/abc/spark-scala-2.10/bin/..: No such file or directory
xyz@1XX.1XX.X.X: bash: /home/abc/spark-scala-2.10/bin/start-slave.sh: No such file or directory
The master starts, but the worker fails to start.
I have set xyz@1XX.1XX.X.X in conf/slaves in my Spark directory.
Can anyone help me resolve this? I'm probably missing some configuration on my end.
However, when I create the Spark master and worker on the same machine, it works fine.
Have you copied all of Spark's files to the worker too? Also, you need to set up passwordless SSH access between the master and the worker.
Here are the steps I would follow (see the sketch after the list):
Set up public key authentication over SSH
Check /etc/spark/conf.dist/spark-env.sh
scp the Spark directory from computer A (master) to computer B
Set conf/slaves with the hostname of computer B
./start-all.sh
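A rough sketch of the SSH and copy steps, run on machine A; the worker login and paths are taken from the question, everything else is an assumption:
# On machine A (the master): create a key if needed and install it on machine B.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # skip if a key already exists
ssh-copy-id xyz@1XX.1XX.X.X                   # worker login from the question

# Copy the Spark directory to the worker. The start scripts ssh in and cd into the
# same path as on the master (see the error above), so keep the install location
# identical on machine B.
scp -r /home/abc/spark-scala-2.10 xyz@1XX.1XX.X.X:/home/abc/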
For standalone cluster mode, you may set these options in spark-env.sh.
For example,
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=4G
See the SSH access section in Michael Noll's multi-node Hadoop cluster tutorial; setting up SSH the same way will solve your problem:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/