Jenkins copy directories/files in a build

I am trying to copy files out to a network directory during a build, and I keep getting a "No such file or directory" error message.
Copying to local drive works fine:
cp -Rf c:/Jenkins/deployments/TW_ISSUE_A/src c:/Jenkins/deployments/TW_ISSUE_A/target
The following all throw the same message:
cp -Rf c:/Jenkins/deployments/TW_ISSUE_A/src H:/some_dir
cp -Rf c:/Jenkins/deployments/TW_ISSUE_A/src H:\some_dir
cp -Rf c:/Jenkins/deployments/TW_ISSUE_A/src //Hubbell/MISGenl/some_dir
cd c:/Jenkins/deployments/TW_ISSUE_A/src
rsync -avuzb //Hubbell/MISGenl/Projects/Tronweb/TronwebBuilds/test/ora/sql/
cp -Rf c:/Jenkins/deployments/TW_ISSUE_A/src /cygdrive/h/some_dir
I've even created a shell script to call from Jenkins, but I continue to receive that message.
#!/bin/bash
url="http://as-test02:8080/job/TW_ISSUE_A_BUILD/lastSuccessfulBuild/artifact/bui
ld-TW_ISSUE_A_BUILD.tar";
remote_stage_dir="/cygdrive/h/some_dir"
#fetch the artifacts
(cd "$remote_stage_dir" && wget "$url" && tar xvf build-TW_ISSUE_A_BUILD.tar dat
java ora && rm -rf *.tar && cp -r ./ora/* ../INTEGRATION)
Is there any way to copy files out to a mapped drive on the build machine?
Thank you!!

I would guess the mapped drive isn't available in the services context, or that the user executing Jenkins doesn't have access to it. What user is Jenkins running as?
Edit:
I think your problem has two aspects:
1. The user running the Jenkins service isn't allowed to connect to the network.
2. The H: drive mapping isn't known to that user.
If you haven't modified it, the service is most likely running under the LocalSystem account. You can change this by running services.msc (or navigating to Services via the Windows Control Panel), locating the Jenkins service, and setting a different account on the service's Log On tab. This should resolve the first problem.
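The same change can be scripted; a rough sketch from an elevated command prompt (assuming the service is named jenkins, and DOMAIN\jenkinsuser is a placeholder account that has network access and the "Log on as a service" right):
sc config jenkins obj= "DOMAIN\jenkinsuser" password= "secret"
net stop jenkins
net start jenkins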
The second problem can be resolved by using UNC paths (as you tried above) instead of network drives.
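Once the service account has network access, the same UNC form you already tried should work; quoting the path keeps the shell from mangling it (a sketch, assuming //Hubbell/MISGenl/some_dir exists and is writable by the service account):
cp -Rf c:/Jenkins/deployments/TW_ISSUE_A/src "//Hubbell/MISGenl/some_dir"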
The Jenkins wiki has an article about problems like this: My software builds on my computer but not on Jenkins

Related

Trying to download organization data to an external drive

I am trying to backup all of our Google Cloud data to an external storage device.
There is a lot of data, so I am attempting to download the entire bucket at once using the following command, but it halts, saying that there isn't enough storage on the device to complete the transfer.
gsutil -m cp -r \
"bucket name" \
.
What do I need to add to this command to download this information to my local D: drive? I have searched through the available docs and have not been able to find the answer.
I used the gsutil command that GCP provided for me automatically, but it seems to be trying to copy the files to a destination without enough storage to hold the needed data.
Remember that you are running the command from the Cloud Shell and not in a local terminal or Windows Command Line. If you inspect the Cloud Shell's file system/structure, it resembles that of a Unix environment, in which you can specify the destination like this instead: ~/bucketfiles/. Even a simple gsutil -m cp -R gs://bucket-name.appspot.com ./ will work, since Cloud Shell can identify ./ as the current directory.
A workaround to this is to perform the command on your Windows Command Line. You would have to install the Google Cloud SDK beforehand.
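For example, after installing the SDK, something along these lines from the Windows Command Line (a sketch; the bucket name and D: folder are placeholders):
mkdir "D:\bucket-backup"
gsutil -m cp -r "gs://bucket-name" "D:\bucket-backup"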
Alternatively, this can also be done in Cloud Shell, albeit with an extra step:
Download the bucket objects by running gsutil -m cp -R gs://bucket-name ~/, which downloads them into the home directory in Cloud Shell.
Transfer the files downloaded to the ~/ (home) directory from Cloud Shell to the local machine, either through the user interface or by running gcloud alpha cloud-shell scp (sketched below).
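The scp form looks roughly like this (a sketch; the file names are placeholders):
gcloud alpha cloud-shell scp cloudshell:~/bucket-files.tar localhost:~/bucket-files.tar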

AWS Elastic Beanstalk unable to deploy a working version

Elastic Beanstalk is infinitely copying a file to the /tmp folder that I created with a config file in .ebextensions. The name of this file is /tmp/mount-efs.sh. This file causes an issue on initialisation of an environment, so I am trying to get rid of it, or at least change its contents.
I tried already:
Deploy an older version that does not have this file.
Result: the EC2 instance does not get deleted, so the file is still there.
Upload the zip instead of using the application version.
Result: the EC2 instance does not get deleted, so the file is still there.
Delete the file /tmp/mount-efs.sh.
Result: the file immediately reappears, along with its ".bak" file.
Remove the '.config' file from /var/app/staging/.ebextensions/.
Result: same error, and the file mount-efs.sh is still created in the /tmp folder.
I think Elastic Beanstalk is stuck on a version that it thinks works, but that version has an issue, and EB does not allow me to deploy a different version (older or newer).
The stranger thing is that the version EB falls back to every time did not have the file in .ebextensions.
I also tried to rebuild the environment.
Result: the fallback is loaded, the file is there, and the issue happens.
from eb-engine.log:
Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-west-2:xxxxxxxxxxxx:stack/awseb-e-xxxxxxxxxxx-stack/nnnnnnnn-nnnn-nnnn-nnnn-xxxxxxxxxxxx -r AWSEBAutoScalingGroup --region us-west-2 --configsets Infra-EmbeddedPreBuild
2022/07/14 20:31:13.403626 [INFO] Error occurred during build: Command 01_mount failed
2022/07/14 20:31:13.403667 [ERROR] An error occurred during execution of command [self-startup] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
This error happens every 5 seconds, so EB is in an infinite loop here.
So I want to get rid of the /tmp/mount-efs.sh file, or at least change its contents, and I want to do this directly via SSH on the EC2 instance itself.
My understanding is that EB runs the config files I added in .ebextensions, and those config files create files in the /tmp folder that run on initialization.
So which file do I have to change so that the changes are reflected in the file created in the /tmp folder (without a deployment)?
Or can I stop the initialization loop somehow?
The infinite loop happens because of a command that calls a file in /var/www/html that did not exist. Why this file did not exist is a riddle to me; the whole /var/www/html folder was empty. Normally Elastic Beanstalk should do this setup before running the commands (create the app folder and staging, unzip the source code into staging, copy it into the app/current folder, and create a symlink to the app/current folder), but that was not the case here.
I was able to solve the infinite-loop issue by doing the following:
sudo mkdir -p /var/app/staging
cd $_
sudo unzip /opt/elasticbeanstalk/deployment/app_source_bundle
sudo cp -rpv /var/app/staging /var/app/current
sudo rm -rf /var/www/html
sudo ln -s /var/app/current /var/www/html
mkdir -p: creates the directory and any missing parents, so if "app" does not exist it is created before "staging".
$_: expands to the last argument of the previous command; here that was /var/app/staging.
unzip: unzips the source bundle into staging.
cp -rp: copies recursively (-r) and preserves ownership and timestamps (-p) from "staging" into "current".
rm -rf /var/www/html: deletes the existing html folder. Be careful what you delete with this command!
ln -s: creates a symbolic link at /var/www/html pointing to /var/app/current.
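To verify the repair, a quick check (assuming the commands above succeeded):
readlink -f /var/www/html    # should print /var/app/current
ls /var/app/current          # should list the unzipped application files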

Jenkins - bash: aws: command not found but runs fine from terminal

In Build Step, I've added "Send files or execute command over SSH" -> SSH Publishers -> Exec command. I'm trying to run an aws command to copy a file from EC2 to S3. The same command runs fine when I execute it over the terminal, but via Jenkins it simply returns:
bash: aws: command not found
The command is
cd ~/.local/bin/ && aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip
Based on the comments.
The solution was to use the following command:
cd ~/.local/bin/ && ./aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip
since aws is not available in the PATH environment variable.
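Equivalently, the cd can be dropped by invoking the binary through its absolute path (a sketch; it assumes the CLI really lives in /home/ec2-user/.local/bin, and the absolute path also sidesteps any doubt about ~ expansion in a non-interactive session):
/home/ec2-user/.local/bin/aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip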
command not found indicates that the aws utility is not on $PATH for the jenkins user.
To confirm, sudo su -l jenkins and then issue the command which aws - this will most likely return no results.
You have two options:
use the full path (likely /usr/local/bin/aws)
add /usr/local/bin to the jenkins user's $PATH
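For the second option, a minimal sketch (assuming your job runs a shell step) is to extend PATH at the top of that step:
export PATH="$PATH:/usr/local/bin"
which aws   # should now print /usr/local/bin/aws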
I need my Makefile to work in both Linux and Windows, so the accepted answer is not an option for me.
I diagnosed the problem by adding the following to the top of my build script:
whoami
which aws
env|grep PATH
This returned:
root
which: no aws in (/sbin:/bin:/usr/sbin:/usr/bin)
PATH=/sbin:/bin:/usr/sbin:/usr/bin
Bizarrely, the path does not include /usr/local/bin, even though the interactive shell on the Jenkins host includes it. The fix is simple enough: create a symlink on the Jenkins host:
ln -s /usr/local/bin/aws /bin/aws
Now the aws command can be found by scripts running in Jenkins (in /bin).

How to download an entire bucket in GCP?

I have a problem downloading entire folder in GCP. How should I download the whole bucket? I run this code in GCP Shell Environment:
gsutil -m cp -R gs://my-uniquename-bucket ./C:\Users\Myname\Desktop\Bucket
and I get an error message: "CommandException: Destination URL must name a directory, bucket, or bucket subdirectory for the multiple source form of the cp command. CommandException: 7 files/objects could not be transferred."
Could someone please point out the mistake in the code line?
To download an entire bucket, you must first install the Google Cloud SDK and then run this command:
gsutil -m cp -R gs://project-bucket-name path/to/local
where path/to/local is the path to local storage on your machine.
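For example, on a Linux or macOS machine (a sketch; the names are placeholders):
mkdir -p ~/bucket-copy
gsutil -m cp -R gs://project-bucket-name ~/bucket-copy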
The error lies within the destination URL as specified by the error message.
I run this code in GCP Shell Environment
Remember that you are running the command from the Cloud Shell and not in a local terminal or Windows Command Line. Thus, it is throwing that error because it cannot find the path you specified. If you inspect the Cloud Shell's file system/structure, it resembles that of a Unix environment, in which you can specify the destination like this instead: ~/bucketfiles/. Even a simple gsutil -m cp -R gs://bucket-name.appspot.com ./ will work, since Cloud Shell can identify ./ as the current directory.
A workaround to this issue is to perform the command on your Windows Command Line. You would have to install Google Cloud SDK beforehand.
Alternatively, this can also be done in Cloud Shell, albeit with an extra step:
Download the bucket objects by running gsutil -m cp -R gs://bucket-name ~/, which downloads them into the home directory in Cloud Shell.
Transfer the files downloaded to the ~/ (home) directory from Cloud Shell to the local machine, either through the user interface or by running gcloud alpha cloud-shell scp.
Your destination path is invalid:
./C:\Users\Myname\Desktop\Bucket
Change to:
/Users/Myname/Desktop/Bucket
C: is a reserved device name. You cannot specify reserved device names in a relative path. ./C: is not valid.
There is no one-button solution for downloading a full bucket to your local machine through the Cloud Shell.
The best option for an environment like yours (only using the Cloud Shell interface, without gcloud installed on your local system), is to follow a series of steps:
Download the whole bucket to the Cloud Shell environment
Zip the contents of the bucket
Upload the zipped file back to the bucket
Download the file through the browser
Clean up:
Delete the local files (local in the context of the Cloud Shell)
Delete the zipped file from the bucket
Unzip the bucket locally
This has the advantage of only having to download a single file on your local machine.
This might seem like a lot of steps for a non-developer, but it's actually pretty simple:
First, run this on the Cloud Shell:
mkdir /tmp/bucket-contents/
gsutil -m cp -R gs://my-uniquename-bucket /tmp/bucket-contents/
pushd /tmp/bucket-contents/
zip -r /tmp/zipped-bucket.zip .
popd
gsutil cp /tmp/zipped-bucket.zip gs://my-uniquename-bucket/zipped-bucket.zip
Then, download the zipped file through this link: https://storage.cloud.google.com/my-uniquename-bucket/zipped-bucket.zip
Finally, clean up:
rm -rf /tmp/bucket-contents
rm /tmp/zipped-bucket.zip
gsutil rm gs://my-uniquename-bucket/zipped-bucket.zip
After these steps, you'll have a zipped-bucket.zip file in your local system that you can unzip with the tool of your choice.
Note that this might not work if you have too much data in your bucket and the Cloud Shell environment can't store it all, but you could repeat the same steps on folders instead of buckets to keep the size manageable.
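For example, the first copy step could be repeated per top-level folder (a sketch; some-folder is a placeholder):
mkdir -p /tmp/bucket-contents/
gsutil -m cp -R gs://my-uniquename-bucket/some-folder /tmp/bucket-contents/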

Where can I find the folder which I downloaded from gcloud bucket

Using gcloud shell, I downloaded my whole bucket, but I couldn't find the downloaded files.
I used the command
gsutil -m cp -R gs://bucket/* .
P.S. Please don't downvote this post; if I asked something wrong, let me know in the comments and I will learn how to ask questions correctly and save your time. Thanks
You used the command gsutil cp, as documented here:
https://cloud.google.com/storage/docs/gsutil/commands/cp
The parameters for this command are:
gsutil cp [OPTION]... src_url dst_url
So you used the option -m to perform a parallel (multi-threaded/multi-processing) copy.
Then you also added -R to traverse all directories in your bucket.
As "destination URL" you entered ".", which specifies the current working directory.
So your files should be located in your home directory, or in whatever directory you had switched to using the cd command inside your command window.
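If in doubt, a quick way to check from the same Cloud Shell session (a sketch):
pwd      # the directory the files were copied into, if you haven't cd'd since
ls -R .  # recursively list what was downloaded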
It would download to the directory you were in when you ran the command. If you never changed directories using the cd command, then it should be in your home directory; on a Mac, that would be Macintosh HD > Users > YourName.