My Jenkins setup works as follows:
It takes a checkout of the code from GitHub.
It publishes the code successfully to my S3 bucket.
The IAM user I have configured for it has full access permissions to S3.
But a problem occurs when I delete a file/directory: the build updates all the files in my S3 bucket but doesn't remove the deleted files/directories. Is deleting files/directories not possible with the Jenkins S3 plugin?
The S3 plugin removes files on the onDelete event. Jenkins fires this event when it goes to remove a build from history (due to rotation or something like that). Uploading works only as uploading, not as updating, so it never deletes remote files.
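If you need the bucket to mirror the workspace (including deletions), one workaround is a script build step that uploads the build output and then prunes stale keys. A minimal sketch with boto3, assuming Python is available on the agent; the bucket name and build directory are placeholders:

import os
import boto3

BUCKET = "my-site-bucket"   # placeholder: your target bucket
BUILD_DIR = "dist"          # placeholder: the directory Jenkins built

s3 = boto3.client("s3")

# Upload everything and remember which keys should exist afterwards.
expected_keys = set()
for root, _dirs, files in os.walk(BUILD_DIR):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, BUILD_DIR).replace(os.sep, "/")
        expected_keys.add(key)
        s3.upload_file(path, BUCKET, key)

# Prune anything in the bucket that no longer exists locally.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        if obj["Key"] not in expected_keys:
            s3.delete_object(Bucket=BUCKET, Key=obj["Key"])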
I am trying to connect the content of a static website in an S3 bucket to a CodeCommit repo via CodeDeploy.
However, when I set up a repo via CodeCommit and a CodeDeploy pipeline and push changes to the HTML file in my S3 bucket, the static HTML page doesn't load; instead my browser screen either briefly flashes or it downloads the HTML file.
I know I have the S3 bucket configured correctly because when I test my .html file via its public URL, it loads as expected.
Additionally, when I download my HTML file from my S3 bucket BEFORE I push committed changes, the same file downloads fine. However, when I download the newly committed HTML file from S3, it's corrupted, which makes me think it's an issue with how I've configured CodeDeploy, but I can't figure it out.
I believe I have the header information configured correctly.
The S3 bucket policy allows for reading of objects. CodePipeline successfully pushes my repo changes to my S3 bucket. But for some reason, even though S3 still sees the file type as HTML, it isn't served as such after a push from CodeDeploy. Additionally, when I try to download the newly pushed HTML file and open it, the HTML code is all jumbled.
Any help or insights are appreciated.
Eureka! I found the solution (by accident).
Just in case others run into this problem: when configuring a deployment pipeline in CodePipeline, if you don't select "Extract file before deploy" in your deployment configuration step, CodePipeline will instead deploy any CodeCommit HTML files (and I assume other file types as well) as "octet-streams". Enabling "Extract file before deploy" fixed this problem.
Now I will be able to finally sleep tonight!
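For anyone who prefers scripting over clicking through the console: the same setting can be flipped programmatically. A hedged sketch with boto3, where the pipeline name is a placeholder and the S3 deploy action is located by its provider; "Extract": "true" is the programmatic counterpart of that checkbox:

import boto3

PIPELINE_NAME = "my-site-pipeline"  # placeholder: your pipeline's name

cp = boto3.client("codepipeline")
pipeline = cp.get_pipeline(name=PIPELINE_NAME)["pipeline"]

# Find the S3 deploy action and turn extraction on.
for stage in pipeline["stages"]:
    for action in stage["actions"]:
        type_id = action["actionTypeId"]
        if type_id["category"] == "Deploy" and type_id["provider"] == "S3":
            # Corresponds to the "Extract file before deploy" checkbox.
            action["configuration"]["Extract"] = "true"

cp.update_pipeline(pipeline=pipeline)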
Is there any way I can monitor an S3 bucket for any new files added to it using boto3? Once a new file is added to the S3 bucket, it needs to be downloaded.
My Python code needs to run on an external VMC server, which is not hosted on an AWS EC2 instance. Whenever a vendor pushes a new file to our public S3 bucket, I need to download those files to this VMC server for ingestion into our on-prem databases/servers. I can't access the VMC server from AWS either, and there is no webhook available.
I have written the code for downloading the files, but how can I monitor an S3 bucket for any new files?
Take a look at S3 Event Notifications: https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
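Since the VMC server can only make outbound calls, one common pattern built on event notifications is to have the bucket publish s3:ObjectCreated:* events to an SQS queue and long-poll that queue with boto3 from the external server. A sketch under those assumptions; the queue URL is a placeholder:

import json
import urllib.parse
import boto3

# Placeholder: an SQS queue that the bucket's event notifications target.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/new-file-events"

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

while True:
    # Long-poll so the loop isn't hammering the SQS API.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        for record in body.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            # Object keys arrive URL-encoded in event notifications.
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            s3.download_file(bucket, key, key.split("/")[-1])
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])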
I want to delete S3 files from the bucket by creating a Bamboo plan/script which, when run, will delete the file.
I tried creating a plan and then creating a task, but in the task I can see no option for Amazon S3 Object in the list.
I have referred to the below URL and followed the steps:
https://utoolity.atlassian.net/wiki/spaces/TAWS/pages/19464196/Using+the+Amazon+S3+Object+task+in+Bamboo
Is there any other way I can create a Bamboo plan and delete files from S3?
The link in the question is to a paid 3rd-party Bamboo plugin (link here) and it is not installed by default.
You currently have two options for AWS and Bamboo integration:
Purchase the Tasks for AWS Bamboo plugin.
Create a script task that uses the AWS S3 API or AWS SDK to achieve what you are trying to do (Amazon's REST Delete S3 Object); see the sketch below.
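A minimal sketch of such a script task using Python and boto3; the bucket and key are placeholders, and credentials are assumed to come from environment variables or an instance profile:

import boto3

# Placeholders; in a real plan these would come from Bamboo variables.
BUCKET = "my-bucket"
KEY = "path/to/file-to-delete.txt"

s3 = boto3.client("s3")
s3.delete_object(Bucket=BUCKET, Key=KEY)
print(f"Deleted s3://{BUCKET}/{KEY}")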
I am running a website on an AWS S3 bucket. I have to update the website once in a while. At the moment, when I do a deployment I just copy the built files to my bucket and override the existing ones.
Is there a way to do some versioning on these deployments? I know there is built-in versioning in S3, but I think it is only for individual files.
The best option would be that every deployment is tagged with a git commit-id and I could roll back to a particular commit-id if needed.
Any ideas? I already tried naming directories with a commit-id prefix, but the problem is that index.html has to live in the root dir.
If you want a solution for non-technical users who could roll back to the previous version just by doing some clicking in the AWS console, you can try changing the Index document config.
For example, you have structure in bucket like this:
bucket/v1/index.html
bucket/v2/index.html
...
bucket/vN/index.html
It means that you would only need to change the config in Bucket properties -> Static website hosting -> Index document from v2/index.html to v1/index.html.
Sounds like "just doing some clicking in the AWS console".
You can use AWS CodePipeline and use git reverts to manage rollbacks. See this GitHub repository for a CloudFormation stack that sets up a website on S3/CloudFront with something like this in place.
You can configure bucket versioning using any of the following methods:
Configure versioning using the Amazon S3 console.
Configure versioning programmatically using the AWS SDKs.
Both the console and the SDKs call the REST API Amazon S3 provides to manage versioning.
Note
If you need to, you can also make the Amazon S3 REST API calls directly from your code. However, this can be cumbersome because it requires you to write code to authenticate your requests.
Each bucket you create has a versioning subresource (see Bucket Configuration Options) associated with it. By default, your bucket is unversioned, and accordingly the versioning subresource stores an empty versioning configuration.
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
</VersioningConfiguration>
To enable versioning, you send a request to Amazon S3 with a versioning configuration that includes a status.
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Status>Enabled</Status>
</VersioningConfiguration>
To suspend versioning, you set the status value to Suspended.
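With the SDKs this is a single call; for example, a sketch with boto3, where the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

# Equivalent of sending the Status=Enabled configuration shown above.
s3.put_bucket_versioning(
    Bucket="my-website-bucket",  # placeholder
    VersioningConfiguration={"Status": "Enabled"},
)

# To suspend versioning, send Status=Suspended instead:
# s3.put_bucket_versioning(Bucket="my-website-bucket",
#                          VersioningConfiguration={"Status": "Suspended"})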
More information here.
I'm using AWS to host a static website. Unfortunately, it's very tedious to upload the directory to S3. Is there any way to streamline the process?
Have you considered using the AWS CLI (AWS Command Line Interface) to interact with AWS services and resources?
Once you install and configure the AWS CLI, to update the site all you need to do is:
aws s3 sync /local/dev/site s3://my-website-bucket
This way you can continue developing the static site locally, and a simple aws s3 sync command line call would automatically look at the files that have changed since the last sync and upload them to S3 without any mess.
To make the newly created objects public (if not done using a bucket policy):
aws s3 sync /local/dev/site s3://my-website-bucket --acl public-read
The best part is that multipart upload is built in. Additionally, you can sync back from S3 to local (the reverse):
aws s3 sync s3://my-website-bucket /local/dev/site