Upload a folder/directory to AWS S3 using the sync command with wildcards

I want to upload only directories whose names end with 0, 1, or 2 to AWS S3 using the aws s3 sync command. For example, I want to upload a directory named anyname0.
I know I can do this with files ending in a specific extension, like this:
aws s3 sync 'D:\TestData\Actual Data' s3://destination-AWSDOC-EXAMPLE-BUCKET/ --exclude "*" --include "*0.txt" --include "*1.txt" --include "*2.txt"
But is there any way to apply this to directories as well? Please help.

You can accomplish this with the following pattern:
aws s3 sync 'D:\TestData\Actual Data' \
s3://destination-AWSDOC-EXAMPLE-BUCKET/ \
--exclude "*" \
--include "*anyname0/**"
Example
$ tree
.
├── anyname0
│   ├── 6.txt
│   ├── 7.txt
│   ├── 8.txt
│   └── 9.txt
├── anyname1
│   ├── 6.txt
│   ├── 7.txt
│   ├── 8.txt
│   └── 9.txt
├── anyname2
│   ├── 6.txt
│   ├── 7.txt
│   ├── 8.txt
│   └── 9.txt
├── anyname3
│   ├── 6.txt
│   ├── 7.txt
│   ├── 8.txt
│   └── 9.txt
└── anyname4
    ├── 6.txt
    ├── 7.txt
    ├── 8.txt
    └── 9.txt
$ aws s3 sync ./ s3://my-bucket --exclude "*" --include "*anyname0/**" --dryrun
(dryrun) upload: anyname0/6.txt to s3://my-bucket/anyname0/6.txt
(dryrun) upload: anyname0/7.txt to s3://my-bucket/anyname0/7.txt
(dryrun) upload: anyname0/8.txt to s3://my-bucket/anyname0/8.txt
(dryrun) upload: anyname0/9.txt to s3://my-bucket/anyname0/9.txt
You can also specify multiple patterns:
$ aws s3 sync ./ s3://my-bucket --exclude "*" \
--include "*anyname0/**" \
--include "*anyname3/**" \
--dryrun
(dryrun) upload: anyname0/6.txt to s3://my-bucket/anyname0/6.txt
(dryrun) upload: anyname0/7.txt to s3://my-bucket/anyname0/7.txt
(dryrun) upload: anyname0/8.txt to s3://my-bucket/anyname0/8.txt
(dryrun) upload: anyname0/9.txt to s3://my-bucket/anyname0/9.txt
(dryrun) upload: anyname3/6.txt to s3://my-bucket/anyname3/6.txt
(dryrun) upload: anyname3/7.txt to s3://my-bucket/anyname3/7.txt
(dryrun) upload: anyname3/8.txt to s3://my-bucket/anyname3/8.txt
(dryrun) upload: anyname3/9.txt to s3://my-bucket/anyname3/9.txt
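For reference, the CLI applies --exclude/--include filters in the order they appear on the command line, and the last filter that matches a given path wins; that is why the leading --exclude "*" is needed. The selection logic can be sketched in Python, using fnmatch as a rough stand-in for the CLI's matcher (fnmatch's "*" already crosses "/" boundaries, so "**" is approximated with "*" here):

```python
from fnmatch import fnmatch

# Filters in command-line order, mirroring:
#   --exclude "*" --include "*anyname0/*" --include "*anyname3/*"
filters = [("exclude", "*"), ("include", "*anyname0/*"), ("include", "*anyname3/*")]

def selected(path):
    decision = True  # by default, every file is included
    for action, pattern in filters:
        if fnmatch(path, pattern):
            decision = (action == "include")  # the last matching filter wins
    return decision

paths = ["anyname0/6.txt", "anyname1/6.txt", "anyname3/9.txt"]
print([p for p in paths if selected(p)])
# ['anyname0/6.txt', 'anyname3/9.txt']
```

This is only a sketch of the ordering behavior, not the CLI's exact pattern grammar.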

Related

How to solve a dependency problem in an Integration Project when building with Maven?

I have an integration project, created with wso2 IntegrationStudio. It consists of two modules:
file-integration-01-configs for ESB configs
file-integration-01-exporter for the composite exporter
Here's the content of the project:
├── file-integration-01-configs
│   ├── artifact.xml
│   ├── pom.xml
│   ├── src
│   │   └── main
│   │       └── synapse-config
│   │           ├── api
│   │           ├── endpoints
│   │           ├── inbound-endpoints
│   │           ├── local-entries
│   │           ├── message-processors
│   │           ├── message-stores
│   │           ├── proxy-services
│   │           │   └── file-proxy.xml
│   │           ├── sequences
│   │           │   └── file-write-sequence.xml
│   │           ├── tasks
│   │           └── templates
│   └── test
│       └── resources
│           └── mock-services
├── file-integration-01-exporter
│   └── pom.xml
└── pom.xml
I try to build and package the project on the command line
# in the root of the project
$ mvn clean
$ mvn package
The configs module compiles just fine, but the composite exporter module fails with the following error:
[INFO] --------------< com.example:file-integration-01-exporter >--------------
[INFO] Building file-integration-01-exporter 1.0.0 [2/3]
[INFO] -------------------------[ carbon/application ]-------------------------
[WARNING] The POM for com.example.sequence:file-write-sequence:xml:1.0.0 is missing, no dependency information available
[WARNING] The POM for com.example.proxy-service:file-proxy:xml:1.0.0 is missing, no dependency information available
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for file-integration-01 1.0.0:
[INFO]
[INFO] file-integration-01-configs ........................ SUCCESS [ 7.335 s]
[INFO] file-integration-01-exporter ....................... FAILURE [ 0.209 s]
[INFO] file-integration-01 ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.815 s
[INFO] Finished at: 2021-03-23T09:52:14+01:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project file-integration-01-exporter:
Could not resolve dependencies for project com.example:file-integration-01-exporter:carbon/application:1.0.0:
The following artifacts could not be resolved:
com.example.sequence:file-write-sequence:xml:1.0.0,
com.example.proxy-service:file-proxy:xml:1.0.0:
Failure to find com.example.sequence:file-write-sequence:xml:1.0.0 in
http://maven.wso2.org/nexus/content/groups/wso2-public/ was cached in
the local repository, resolution will not be reattempted until the update
interval of wso2-nexus has elapsed or updates are forced ->
The file file-integration-01-configs/artifact.xml is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<artifacts>
    <artifact name="file-proxy" groupId="com.example.proxy-service" version="1.0.0" type="synapse/proxy-service" serverRole="EnterpriseServiceBus">
        <file>src/main/synapse-config/proxy-services/file-proxy.xml</file>
    </artifact>
    <artifact name="file-write-sequence" groupId="com.example.sequence" version="1.0.0" type="synapse/sequence" serverRole="EnterpriseServiceBus">
        <file>src/main/synapse-config/sequences/file-write-sequence.xml</file>
    </artifact>
</artifacts>
The beginning of file-integration-01-exporter/pom.xml, including the dependency information:
<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>file-integration-01-exporter</artifactId>
    <version>1.0.0</version>
    <packaging>carbon/application</packaging>
    <name>file-integration-01-exporter</name>
    <description>file-integration-01-exporter</description>
    <properties>
        <com.example.proxy-service_._file-proxy>capp/EnterpriseServiceBus</com.example.proxy-service_._file-proxy>
        <com.example.sequence_._file-write-sequence>capp/EnterpriseServiceBus</com.example.sequence_._file-write-sequence>
        <artifact.types>jaggery/app=zip,synapse/priority-executor=xml,synapse/inbound-endpoint=xml,service/rule=aar,synapse/message-store=xml,event/stream=json,service/meta=xml,datasource/datasource=xml,synapse/proxy-service=xml,bpel/workflow=zip,synapse/sequence=xml,synapse/endpointTemplate=xml,carbon/application=car,wso2/gadget=dar,synapse/api=xml,synapse/event-source=xml,synapse/message-processors=xml,event/receiver=xml,lib/dataservice/validator=jar,synapse/template=xml,synapse/endpoint=xml,lib/carbon/ui=jar,lib/synapse/mediator=jar,event/publisher=xml,synapse/local-entry=xml,synapse/task=xml,webapp/jaxws=war,registry/resource=zip,synapse/configuration=xml,service/axis2=aar,synapse/lib=zip,synapse/sequenceTemplate=xml,event/execution-plan=siddhiql,service/dataservice=dbs,web/application=war,lib/library/bundle=jar</artifact.types>
    </properties>
    <dependencies>
        <dependency>
            <groupId>com.example.sequence</groupId>
            <artifactId>file-write-sequence</artifactId>
            <version>1.0.0</version>
            <type>xml</type>
        </dependency>
        <dependency>
            <groupId>com.example.proxy-service</groupId>
            <artifactId>file-proxy</artifactId>
            <version>1.0.0</version>
            <type>xml</type>
        </dependency>
    </dependencies>
What kind of fix do I have to apply, and to which of the Maven POMs in the project workspace, in order to resolve this missing dependency?

Can't open a file inside a google test

I have a file in the same directory as the test.
Since I'm using Bazel as the build system (on Linux), I don't have to create the main function and initialize the tests myself (in fact, I don't know where the main function resides when using Bazel).
#include <string>
#include <fstream>
// Other include files
#include "gtest/gtest.h"

TEST(read_file_test, read_file) {
    std::string input_file("file_path");
    graph gr;
    graph_loader loader;
    ASSERT_TRUE(loader.load(gr, input_file));
}
The BUILD file for the tests:
cc_test(
    name = "loaderLib_test",
    srcs = ["loaderLib_test.cpp"],
    deps = [
        "//src/lib/loader:loaderLib",
        "@com_google_googletest//:gtest_main",
    ],
    data = glob(["resources/tests/input_graphs/graph_topology/**"]),
)
The WORKSPACE file:
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "com_google_googletest",
    remote = "https://github.com/google/googletest",
    tag = "release-1.8.1",
)
The directory structure:
.
├── README.md
├── WORKSPACE
├── bazel-bin -> /home/mmaghous/.cache/bazel/_bazel_mmaghous/35542ec7cbabc2e6f7475e3870a798d1/execroot/__main__/bazel-out/k8-fastbuild/bin
├── bazel-graph_processor -> /home/mmaghous/.cache/bazel/_bazel_mmaghous/35542ec7cbabc2e6f7475e3870a798d1/execroot/__main__
├── bazel-out -> /home/mmaghous/.cache/bazel/_bazel_mmaghous/35542ec7cbabc2e6f7475e3870a798d1/execroot/__main__/bazel-out
├── bazel-testlogs -> /home/mmaghous/.cache/bazel/_bazel_mmaghous/35542ec7cbabc2e6f7475e3870a798d1/execroot/__main__/bazel-out/k8-fastbuild/testlogs
├── resources
│   └── tests
│       └── input_graphs
│           ├── graph_data
│           │   └── G1_data.txt
│           └── graph_topology
│               ├── G1_notFully_notWeakly.txt
│               ├── G2_notFully_Weakly.txt
│               ├── G3_fully_weakly.txt
│               ├── G4_fully_weakly.txt
│               ├── G5_notFully_weakly.txt
│               ├── G6_notFully_weakly.txt
│               └── G7_notFully_notWeakly.txt
├── src
│   ├── lib
│   │   ├── algorithms
│   │   │   ├── BUILD
│   │   │   └── graph_algorithms.hpp
│   │   ├── graph
│   │   │   ├── BUILD
│   │   │   └── graph.hpp
│   │   └── loader
│   │       ├── BUILD
│   │       └── graph_loader.hpp
│   └── main
│       ├── BUILD
│       └── main.cpp
├── tests
│   ├── BUILD
│   ├── algorithmsLib_test.cpp
│   ├── graphLib_test.cpp
│   └── loaderLib_test.cpp
└── todo.md
So how should I reference the file if it's in the same folder as the test or any other folder?
BTW: Using the full path from the root of my file system works fine.
The problem is that the glob is relative to the BUILD file, just as when you refer to a file in srcs. The glob expands to an empty list because no file matches. The easiest way to see this is to put a full path explicitly in place of the glob; Bazel will then complain.
You have two options: either move the resources folder inside the tests folder, or create a BUILD file in the resources folder that declares a filegroup exposing the txt files, and then reference that filegroup target from the test target's data attribute.
https://docs.bazel.build/versions/master/be/general.html#filegroup
In resources/BUILD
filegroup(
    name = "resources",
    srcs = glob(["tests/input_graphs/graph_topology/**"]),
    visibility = ["//visibility:public"],
)
In tests/BUILD
cc_test(
    name = "loaderLib_test",
    srcs = ["loaderLib_test.cpp"],
    deps = [
        "//src/lib/loader:loaderLib",
        "@com_google_googletest//:gtest_main",
    ],
    data = ["//resources"],
)
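As for why the absolute path worked while the relative one did not: a relative path is resolved against the test process's working directory (for Bazel tests, the runfiles tree), never against the directory containing the source file. A minimal Python sketch of that distinction, using hypothetical file names:

```python
import os
import tempfile

# Create a scratch directory holding a data file.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "input.txt"), "w") as f:
    f.write("graph data")

# A relative path only resolves once the working directory matches.
os.chdir(workdir)
print(open("input.txt").read())      # prints: graph data
print(os.path.abspath("input.txt"))  # shows what the relative path really points at
```

In the Bazel case this means the test should open the file via its workspace-relative path (a path starting with resources/...), which stays stable no matter where the repository is checked out.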

Using outputs from other tf files in terraform

Is there a way of using output values of a module that is located in another folder? Imagine the following environment:
tm-project/
├── lambda
│   └── vpctm-manager.js
├── networking
│   ├── init.tf
│   ├── terraform.tfvars
│   ├── variables.tf
│   └── vpc-tst.tf
├── prd
│   ├── init.tf
│   ├── instances.tf
│   ├── terraform.tfvars
│   └── variables.tf
└── security
    └── init.tf
I want to create EC2 instances and place them in a subnet that is declared in the networking folder. So I was wondering whether I could access the outputs of the module used in networking/vpc-tst.tf as inputs to my prd/instances.tf.
Thanks in advance.
You can use an outputs.tf file to define the outputs of a Terraform module. Each output has a name, as in the content below:
output "vpc_id" {
  value = "${aws_vpc.default.id}"
}
These can then be referenced within your prd/instances.tf by combining the module name with the output name you defined in that file.
For example, if you have a module named vpc that uses this output, you could reference it as below:
module "vpc" {
  ......
}

resource "aws_security_group" "my_sg" {
  vpc_id = module.vpc.vpc_id
}

Terraform: Passing variable from one module to another

I am creating a Terraform module for AWS VPC creation.
Here is my directory structure
➢ tree -L 3
.
├── main.tf
├── modules
│   ├── subnets
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
└── variables.tf
3 directories, 12 files
In the subnets module, I want to grab the vpc id of the vpc (sub)module.
In modules/vpc/outputs.tf I use:
output "my_vpc_id" {
  value = "${aws_vpc.my_vpc.id}"
}
Will this be enough for me doing the following in modules/subnets/main.tf ?
resource "aws_subnet" "env_vpc_sn" {
  ...
  vpc_id = "${aws_vpc.my_vpc.id}"
}
Your main.tf (or wherever you use the subnet module) would need to pass this in from the output of the VPC module, and your subnet module needs to take it as a required variable.
To access a module's output you need to reference it as module.<MODULE NAME>.<OUTPUT NAME>:
In a parent module, outputs of child modules are available in expressions as module.<MODULE NAME>.<OUTPUT NAME>. For example, if a child module named web_server declared an output named instance_ip_addr, you could access that value as module.web_server.instance_ip_addr.
So your main.tf would look something like this:
module "vpc" {
  # ...
}

module "subnets" {
  vpc_id = "${module.vpc.my_vpc_id}"
  # ...
}
and subnets/variables.tf would look like this:
variable "vpc_id" {}
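Conceptually, module outputs behave like a function's return values: the parent configuration invokes each module and wires one module's output into another's input variable. A rough Python analogy of that data flow (hypothetical names; this only illustrates the wiring, not Terraform semantics):

```python
def vpc_module():
    # Stands in for modules/vpc: returns its declared outputs.
    return {"my_vpc_id": "vpc-123456"}

def subnets_module(vpc_id):
    # Stands in for modules/subnets: vpc_id is its required input variable.
    return {"subnet_vpc_id": vpc_id}

# The parent (main.tf) does the wiring between the two modules.
vpc_outputs = vpc_module()
subnet_outputs = subnets_module(vpc_id=vpc_outputs["my_vpc_id"])
print(subnet_outputs)  # {'subnet_vpc_id': 'vpc-123456'}
```

The subnets module never reaches into the vpc module directly, which is exactly why modules/subnets/main.tf cannot reference aws_vpc.my_vpc.id itself.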

Trouble launching AWS environment with django

I followed the steps line by line in the documentation, but I keep getting this error:
Your WSGIPath refers to a file that does not exist.
Here is my '.config' file (except for the app name and the keys):
container_commands:
  01_syncdb:
    command: "python manage.py syncdb --noinput"
    leader_only: true

option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: [myapp]/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: [myapp].settings
  - option_name: AWS_SECRET_KEY
    value: XXXX
  - option_name: AWS_ACCESS_KEY_ID
    value: XXXX
I googled around and found that someone else had a similar problem and solved it by editing 'optionsettings.[myapp]'. I don't want to delete something I need, but here is what I have:
[aws:autoscaling:asg]
Custom Availability Zones=
MaxSize=1
MinSize=1
[aws:autoscaling:launchconfiguration]
EC2KeyName=
InstanceType=t1.micro
[aws:autoscaling:updatepolicy:rollingupdate]
RollingUpdateEnabled=false
[aws:ec2:vpc]
Subnets=
VPCId=
[aws:elasticbeanstalk:application]
Application Healthcheck URL=
[aws:elasticbeanstalk:application:environment]
DJANGO_SETTINGS_MODULE=
PARAM1=
PARAM2=
PARAM3=
PARAM4=
PARAM5=
[aws:elasticbeanstalk:container:python]
NumProcesses=1
NumThreads=15
StaticFiles=/static/=static/
WSGIPath=application.py
[aws:elasticbeanstalk:container:python:staticfiles]
/static/=static/
[aws:elasticbeanstalk:hostmanager]
LogPublicationControl=false
[aws:elasticbeanstalk:monitoring]
Automatically Terminate Unhealthy Instances=true
[aws:elasticbeanstalk:sns:topics]
Notification Endpoint=
Notification Protocol=email
[aws:rds:dbinstance]
DBDeletionPolicy=Snapshot
DBEngine=mysql
DBInstanceClass=db.t1.micro
DBSnapshotIdentifier=
DBUser=ebroot
The user who solved that problem deleted certain lines and then ran 'eb start'. I deleted the same lines that the original user said they deleted, but when I ran 'eb start' I got the exact same problem again.
If anybody can help me out, that would be amazing!
I was having this exact problem all day yesterday, and I am using Ubuntu 13.10. I also tried deleting the options file under .ebextensions, to no avail.
What I believe finally fixed the issue was ~/mysite/requirements.txt. I double-checked the values after I was all done with eb init and eb start, and noticed they were different from what http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html mentions at the beginning of the tutorial.
The file was missing the MySQL line when I checked during the WSGIPath problem, so I simply added the line:
MySQL-python==1.2.3
and then committed all the changes, and it worked.
If that doesn't work for you, below are the .config file settings and the directory structure.
My .config file under ~/mysite/.ebextensions is exactly what was in the tutorial, minus the secret key and access key, you need to replace those with your own:
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true

option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: mysite/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: mysite.settings
  - option_name: AWS_SECRET_KEY
    value: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  - option_name: AWS_ACCESS_KEY_ID
    value: AKIAIOSFODNN7EXAMPLE
My requirements.txt:
Django==1.4.1
MySQL-python==1.2.3
argparse==1.2.1
wsgiref==0.1.2
And my tree structure. This starts out in ~/, so if I were to run:
cd ~/
tree -a mysite
you should get the following output, including a bunch of directories under .git (I removed them because there are a lot):
mysite
├── .ebextensions
│   └── myapp.config
├── .elasticbeanstalk
│   ├── config
│   └── optionsettings.mysite-env
├── .git
├── .gitignore
├── manage.py
├── mysite
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── settings.py
│   ├── settings.pyc
│   ├── urls.py
│   ├── wsgi.py
│   └── wsgi.pyc
└── requirements.txt