What are 'type' and 'files' in 'artifacts' in an AWS buildspec YAML file?

I am a noob. What is the 'artifacts' section in the buildspec YAML file?
I read on https://docs.aws.amazon.com/codebuild/latest/userguide/getting-started-create-build-spec-console.html,
"Artifacts represents the set of build output artifacts that CodeBuild uploads to the output bucket. files represents the files to include in the build output."
Maybe I am not understanding it correctly. Given the settings in the screenshot above, I expect two zipped files (template.yml and outputtemplate.yml) to be uploaded to the output bucket, say BUCKET=MYBUCKET.
But when I check my S3 bucket after I build and deploy, I have two files named something like c7e84f72729709f7a0.
Also, just to understand what's going on, I tried removing the lines 'type: zip' and '-template.yml', and built and deployed again. I expected only one file, since I removed lines 8 and 10, but the result was still two files sitting in my bucket. What exactly are the artifacts? And what is the type? (I can't even find any documentation for it.) Why is the type in most cases, if not always, zip, when the uploaded file is apparently not a zip file?
Thank you.

The file c7e84f72729709f7a0 is your zip file. It contains both .yml files. Just unpack it like any other zip file; you may need to add the .zip extension if your unpacking software requires it.
I don't know where the type: zip comes from. The reference docs for buildspec.yml do not document such a field.
Artifacts are the output files of your build. For example, when you build a C++ project, they would be the executables or libraries produced by compiling your C++ source code.
The artifacts are also carried over to the next stage of your CI/CD pipeline, such as integration testing or deployment with CodeDeploy.

Just to add to Marcin's answer, the type property of artifacts was deprecated in version 0.2 (the one that's being used here). You can see the changes across buildspec versions at the bottom of this page.
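To make that concrete, a version 0.2 artifacts section matching the files named in the question would look roughly like this (the file list is taken from the question; the rest of the buildspec from the screenshot is omitted, and the type line is shown only to illustrate what can be removed):

artifacts:
  type: zip              # deprecated in buildspec version 0.2, safe to remove
  files:
    - template.yml
    - outputtemplate.yml

As the answers above note, the listed files are packaged together into a zipped artifact with a hash-style object name, rather than being uploaded to the bucket as separate template.yml and outputtemplate.yml objects.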

Related

Apple script to move files of a type

I have a folder of approximately 1800 .7z files. Some of them uncompress to a single file, some uncompress to a folder with several files of the same type. If I were to uncompress all of them at the same time, what I would end up with is a folder full of .7z files, folders containing multiple files of one type, and multiple single instances of that same file type.
What I would like to do is run a script that takes all files of the same type, from all folders below the main folder, and copies them to another specified folder. I unfortunately don't have much experience with AppleScript, and while this may be simple, it sounds insurmountable to me. Any input would be appreciated.
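Not AppleScript, but if the goal is simply to gather every file of one extension from the nested folders into a single destination, a shell command run from Terminal may be enough. The extension and paths below are placeholders for whatever the actual file type and folders are:

mkdir -p ~/Desktop/Collected
# copy every .png found anywhere under MainFolder into Collected
find ~/Desktop/MainFolder -type f -name '*.png' -exec cp {} ~/Desktop/Collected/ \;

Note that files sharing the same name will overwrite each other in the destination, so a rename step (or a proper AppleScript) would be needed if duplicate names are possible.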

Build AWS Java Lambda with Gradle, use shadowJar or buildZip for archive to upload?

Description
I am developing AWS Java Lambdas, with Gradle as my build tool.
AWS requires a "self-contained" Java archive (.jar, .zip, ...) to be uploaded, which has to include everything: my source code, the dependencies, etc.
There is the Gradle plugin shadow for this purpose; it can be included like this:
import com.github.jengelman.gradle.plugins.shadow.transformers.Log4j2PluginsCacheFileTransformer
...
shadowJar {
    archiveName = "${project.name}.jar"
    mergeServiceFiles()
    transform(Log4j2PluginsCacheFileTransformer)
}
build.dependsOn shadowJar
gradle build produces a file somefunction.jar; in my case it is 9.5 MB in size.
The AWS documentation suggests
putting your dependency .jar files in a separate /lib directory
There are specific instructions on how to do this in Creating a ZIP Deployment Package for a Java Function.
task buildZip(type: Zip) {
    archiveName = "${project.name}.zip"
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtimeClasspath
    }
}
build.dependsOn buildZip
gradle build produces a file build/distributions/somefunction.zip; in my case it is 8.5 MB in size.
Both archives, zip and jar, can be uploaded to AWS and run fine. Performance seems to be the same.
Question
Which archive should I favor, Zip or (shadow)Jar?
More specific questions, which come to my mind:
The AWS documentation says "This [putting your dependency .jar files in a separate /lib directory] is faster than putting all your function’s code in a single jar with a large number of .class files." Does anyone know what exactly is faster? Build time? Cold/warm start? Execution time?
When building the Zip, I am not using the shadowJar features mergeServiceFiles() and Log4j2PluginsCacheFileTransformer. Not using mergeServiceFiles should, in the worst case, decrease the execution time. As long as I omit Log4j2 plugins, I can omit Log4j2PluginsCacheFileTransformer. Right?
Are there any performance considerations using the one or the other?

flatpak-builder with local sources and dependencies

How can I build local sources and dependencies with flatpak-builder?
I can build local sources
flatpak build ../dictionary ./configure --prefix=/app
I can extract and build an application with dependencies from a .json manifest:
flatpak-builder --repo=repo dictionary2 org.gnome.Dictionary.json
But is there no way to build dependencies and local sources together? I can't find a source type
like dir or similar, only archive, git (no hg?) ...
flatpak-builder is meant to automate the whole build process, with a single entry-point: the JSON manifest.
Everything else it obtains from Git, Bazaar or tarballs. Note that for these the "url" property may be a local URL starting with file://.
(There is indeed no support for Hg. If that's important for you, feel free to request it.)
In addition to that, there are a few more source types (see the flatpak-manifest(5) manpage), which can be used to modify the extracted sources (a short manifest sketch follows this list):
file, which points to a local file to copy somewhere into the extracted sources;
patch, which points to a local patch file to apply to the extracted sources;
script, which creates a script in the extracted sources from an array of commands;
shell, which modifies the extracted sources by running an array of commands.
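As a rough illustration, a module in the manifest could combine a tarball referenced through a local file:// URL with one of these source types. The module name, paths, and checksum placeholder below are made up for this sketch:

"modules": [
  {
    "name": "dictionary",
    "sources": [
      {
        "type": "archive",
        "url": "file:///home/user/src/dictionary-1.0.tar.xz",
        "sha256": "<sha256 of the tarball>"
      },
      {
        "type": "patch",
        "path": "local-fix.patch"
      }
    ]
  }
]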
Adding a dir source type might be useful.
However (and I have only flatpaked a few apps and contributed 2 or 3 patches to the code, so I might be completely wrong), care must be taken, as this would easily make builds completely unreproducible, and reproducibility is one thing flatpak-builder tries very hard to enable.
For example, when using a local file source, flatpak-builder will base64-encode the content of that file and use it as a data:text/plain;charset=utf8;base64,<content> URL for the file, which it stores in the manifest included inside the final build.
Something similar might be needed for a dir source (tar the folder then base64-encode the content of the tar?), otherwise it would be impossible to reproduce the build. I've just been told (after submitting this answer) that this changed in Git master, in favour of a new flatpak-builder --bundle-sources option. This would probably make it easier to support reproducible builds with a dir source type.
In any case, feel free to start the conversation around a new dir source type in the upstream bug tracker. :)
There's an experimental CLI tool if you want to use it: https://gitlab.com/csoriano/flatpak-dev-cli
You can read the docs
http://docs.flatpak.org/en/latest/building-simple-apps.html
http://docs.flatpak.org/en/latest/flatpak-builder.html
In a nutshell, this is what you need to use Flatpak as a development workbench:
https://github.com/albfan/gnome-builder/wiki/flatpak

AWS Versioning - batch/mass restore to last version that isn't 0kb?

I made a very foolish error with a large image directory on our server, which is mounted via S3FS to an EC2 instance: I ran Image_Optim on it. It seemed to do a good job until I noticed missing files on the website, which, when I looked, I realized were files that had been left at 0kb...
...Now, fortunately, I have versioning on, and a quick look seems to show that each of the 0kb files also has a correct version from the exact same time.
It has happened to about 1300 files in a directory of 2500. The question is: is it possible for me to batch process all the 0kb files and restore each one to the latest version that is bigger than 0kb?
The only batch restore tool I can find is S3 Browser, but that makes you restore all files in a folder to their latest version. In some cases this would work for the 0kb files, but for many it won't. I also don't own the program, so I would rather do it with a command-line script if possible.
Once your file(s) have become 0 bytes (0kb), you cannot recover them, or at least not easily. If you mean restoring/recovering from an external backup, then that will work.
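That said, since the bucket in the question does have versioning enabled and a command-line script is preferred, something along these lines might do the batch restore. This is only a sketch assuming boto3 is configured with access to the bucket; the bucket name and prefix are placeholders, and it is worth testing against a handful of keys first:

import boto3
from collections import defaultdict

s3 = boto3.client("s3")
BUCKET = "MYBUCKET"   # placeholder
PREFIX = "images/"    # placeholder

# gather every version of every key under the prefix
versions_by_key = defaultdict(list)
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for v in page.get("Versions", []):
        versions_by_key[v["Key"]].append(v)

for key, versions in versions_by_key.items():
    latest = next((v for v in versions if v["IsLatest"]), None)
    if latest is None or latest["Size"] != 0:
        continue  # current object is fine (or is a delete marker), skip it
    # newest previous version that actually has content
    candidates = [v for v in versions if not v["IsLatest"] and v["Size"] > 0]
    if not candidates:
        continue
    best = max(candidates, key=lambda v: v["LastModified"])
    # copying the old version onto the same key makes it the latest version again
    s3.copy_object(
        Bucket=BUCKET,
        Key=key,
        CopySource={"Bucket": BUCKET, "Key": key, "VersionId": best["VersionId"]},
    )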

FinalBuilder Enumeration of files and folders

What's the best way to enumerate a set of files and folders using FinalBuilder?
The context of my question is, I want to compare a source folder with a destination folder, and replace any matching files in the destination folder that are older than the source folder.
Any suggestions?
OK, for future reference, it turns out that under the category "Iterators" there are two very helpful actions:
File/Fileset Iterator
Folder Iterator
Further digging revealed the Robocopy Mirror action, which does exactly what I was looking for, namely syncing the destination folder with the source folder. No need to write my own file iteration routines.
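For reference, the Robocopy Mirror action presumably invokes the Windows robocopy command under the hood, so the rough command-line equivalent (with placeholder paths) would be:

robocopy "C:\SourceFolder" "C:\DestFolder" /MIR

The /MIR switch mirrors the whole tree, including deleting destination files that no longer exist in the source; by default robocopy only copies files that are new or have changed, which covers replacing destination files that are older than their source counterparts.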