Uploading a 155 GB file to an S3 bucket

I'm using S3Express on Windows to upload a 155 GB file to an S3 bucket using the following command:
put E:\<Folder-name>\<File-name>.csv s3://data.<company-name>.org/<Folder-name>/ -mul:100 -t:2
But the upload doesn't seem to start at all. It gets stuck at the following:
Max. Threads: 2
Using MULTIPART UPLOADS (part size:100MB)
S3 Bucket: s3:
S3 Folder: data.<company-name>.org/<Folder-name>/
Selecting Files ...
Press 'Esc' to stop ...
Selected Files to Upload: 1 (154.61GB = 166013845848B) - Use [-showfiles] to list files.
Uploading files to S3...
Press 'Esc' to stop ...
[s=Status] [p=in Progress] [e=Errors] [w=Warnings] [k=sKipped] [d=Dupl] [o=Out]
before throwing the following error:
Error initializing upload for file : E:\<Folder-name>\<File-name>.csv
E:\<Folder-name>\<File-name>.csv: com_err:7 - Failed to connect to s3 port 443: Timed out - Failed to retrieve list of active multipart uploads
------------------------------------------------------------------------------
Done.
Errors (1):
E:\<Folder-name>\<File-name>.csv - com_err:7 - Failed to connect to s3 port 443: Timed out - Failed to retrieve list of active multipart uploads
------------------------------------------------------------------------------
Threads: 0 - Transf.: 0B (0B/sec) ET: 0h 25m 30s
Current Bandwidth: 0B/sec (0B left) - Temporary Network Errors: 4
Compl. Files: 0 of 1 (0B of 154.61GB) (0%) - Skip: 0 - Err: 1 (154.61GB)
I'm new to S3Express and AWS in general.
Any help would be much appreciated.
TIA.

Remove s3:// from the beginning of the destination. According to the docs, the format is bucket_name/object_name. There is no s3:// prefix.
These lines should have been a clue to you that something is wrong with your invocation:
S3 Bucket: s3:
S3 Folder: data.<company-name>.org/<Folder-name>/
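With the prefix removed, the destination is just bucket_name/folder, so the command would look something like this (same placeholders as in the question):
put E:\<Folder-name>\<File-name>.csv data.<company-name>.org/<Folder-name>/ -mul:100 -t:2
S3Express should then report S3 Bucket: data.<company-name>.org instead of the bogus s3: shown above.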

Related

Permission issue when deploying Serverless Python app

I have a Serverless Python app and I am trying to deploy it using sls deploy. The serverless.yml is as follows.
service: update-register
provider:
  name: aws
  runtime: python3.8
  profile: haumi
  stage: ${opt:stage, 'staging'}
  environment: ${file(environment.yml):${self:provider.stage}}
  region: ${self:provider.environment.REGION}
  iamRoleStatements:
    # to be able to read from and write to the bucket
    - Effect: "Allow"
      Action:
        - "sqs:SendMessage"
      Resource: "*"
functions:
  update:
    handler: handler.update
    events:
      - sqs: ${self:provider.environment.AWS_SQS_QUEUE}
The handler file is like this:
def update(event, context):
    print("=== event: ", event)
However, when I deploy and trigger the update function, the following error appears in AWS CloudWatch:
[ERROR] PermissionError: [Errno 13] Permission denied: '/var/task/handler.py'
Traceback (most recent call last):
File "/var/lang/lib/python3.8/imp.py", line 300, in find_module
with open(file_path, 'rb') as file:
I tried changing the permissions of this file but I can't. Any ideas?
This issue had nothing to do with Serverless but with the permissions of my mounted NTFS partition in Ubuntu 18.04.
tl;dr
Change the /etc/fstab entry of the mounted partition to
UUID=8646486646485957 /home/Data ntfs defaults,auto,umask=002,uid=1000,gid=1000 0 0
Use id -u and id -g to get the uid and gid.
The long explanation
What I found out is that on an NTFS partition you cannot simply change a file's permissions with chmod. You have to configure the mask when mounting the partition. Since I mount the partition when booting Ubuntu, the required change was in my fstab file. The umask parameter determines which permission bits cannot be set. You can find more information about this parameter here.
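For instance, umask=002 clears only the write bit for "others" (777 & ~002 = 775, i.e. rwxrwxr-x). A one-off test before touching fstab might look like this (assuming the partition is /dev/sdb1; adjust the device and mount point to yours):
id -u; id -g                    # values to plug into the uid= and gid= options
sudo umount /home/Data
sudo mount -t ntfs -o defaults,umask=002,uid=1000,gid=1000 /dev/sdb1 /home/Data
ls -l /home/Data                # files should now show rwxrwxr-x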
After you do this, reboot. You will find that the files have different permissions. In my case, the permissions that allowed my deployed code to work were:
-rwxrwxr-x 1 user group 59 jul 24 00:47 handler.py*
I am sure there are concerns with allowing others to execute the file, but this solved the issue.
Another cause for the exact same error:
[ERROR] PermissionError: [Errno 13] Permission denied: '/var/task/<my_lambda_python_file>.py'
Traceback (most recent call last):
File "/var/lang/lib/python3.8/imp.py", line 300, in find_module
with open(file_path, 'rb') as file:
I was deploying the Lambda using Atlassian Bamboo, and Bamboo seemed to be messing with the permissions of the files that make up the lambda:
-rw-r-----# 1 <user> <group> 2.5K 19 Nov 18:10 <my_lambda_python_file>.py
I worked around this problem by adding to the Bamboo script:
chmod 755 <my_lambda_python_file>.py
just before bundling the code into a zip file.
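In a Bamboo script task that might look something like this (the zip name and bundling command are hypothetical; only the chmod line comes from the actual fix):
# hypothetical Bamboo packaging step
chmod 755 <my_lambda_python_file>.py   # make the handler readable by the Lambda runtime
zip -r lambda-bundle.zip .             # then bundle the code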

Slow upload speed in aws deploy push command

I am trying to use AWS CodeDeploy with the aws deploy push --debug command. The file to be uploaded is around 250 KB, but the upload doesn't finish. The following logs are displayed:
2017-10-27 11:11:40,601 - MainThread - botocore.auth - DEBUG - CanonicalRequest:
PUT
/frontend-deployer/business-services-0.0.1-SNAPSHOT-classes.jar
partNumber=39&uploadId=.olvaJkxreDZf1ObaHCMtHmkQ5DFE.uZ9Om0sxZB08YG3tqRWBxmGLTFWSYQaj9mHl26LPJk..Stv_vPB5NMaV.zAqsYX6fZz_S3.uN5J4FlxHZFXoeTkMiBSYQB2C.g
content-md5:EDXgvJ8Tt5tHYZ6Nkh7epg==
host:s3.us-east-2.amazonaws.com
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20171027T081140Z
content-md5;host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD
...
2017-10-27 11:12:12,035 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [PUT]>
2017-10-27 11:12:12,035 - MainThread - botocore.awsrequest - DEBUG - Waiting for 100 Continue response.
2017-10-27 11:12:12,189 - MainThread - botocore.awsrequest - DEBUG - 100 Continue response seen, now sending request body.
Even though the file is fairly small (250 KB), the upload doesn't finish.
On the other hand, an upload via the aws s3 cp command takes about 1 second.
How can I increase the upload speed in aws deploy push command?

Chef aws client

I can't quite figure out how to use the aws cookbook. My goal is to download a file from my S3 bucket. Following the documentation, I've put this content in my recipe:
aws = data_bag_item('aws', 'dev')
aws_s3_file '/tmp/authz.war' do
  bucket 'living-artifacts-dev'
  remote_path '/authz/authz.war'
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  region 'eu-central-1'
end
All values are populated correctly, and I've also tested them using the AWS CLI. Nevertheless, the Chef client gets this message:
=========================================================================
Error executing action `create` on resource 'aws_s3_file[/tmp/authz.war]'
=========================================================================
Net::HTTPServerException
------------------------
remote_file[/tmp/authz.war] (/var/chef/cache/cookbooks/aws/providers/s3_file.rb line 40) had an error: Net::HTTPServerException: 403 "Forbidden"
How could I debug this?
EDIT
I've tested it using the aws command-line client. I first set credentials using aws configure and provided the requested values. So, this command:
aws s3 cp s3://living-artifacts-dev/authz/authz.war authz.war
completes correctly and the file is downloaded.
EDIT
More detailed error message:
==> default: * aws_s3_file[/tmp/authz.war] action create
==> default:
==> default: * chef_gem[aws-sdk] action install
==> default: [2017-03-03T11:25:16+00:00] INFO: chef_gem[aws-sdk] installed aws-sdk at ~> 2.2
==> default:
==> default: - install version ~> 2.2 of package aws-sdk
==> default: [2017-03-03T11:25:16+00:00] INFO: Remote and local files do not match, running create operation.
==> default: * chef_gem[aws-sdk] action install (up to date)
==> default: * remote_file[/tmp/authz.war] action create
==> default: [2017-03-03T11:25:16+00:00] INFO: HTTP Request Returned 403 Forbidden:
==> default: [2017-03-03T11:25:16+00:00] WARN: remote_file[/tmp/authz.war] cannot be downloaded from https://living-artifacts-dev.s3.eu-central-1.amazonaws.com/authz/authz.war?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=sFo6JjohgYi%2BYi4Ut7pTy9EGVDCG89IROX%2Bw7ERR%2F20170303%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20170303T112516Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=f3c2b371ad4e1fe24745459adf0463c708e0363a139b598b04e40789c43ded7d: 403 "Forbidden"
Remove the leading slash from remote_path '/authz/authz.war'.
Here is the example from the AWS cookbook documentation:
aws_s3_file '/tmp/foo' do
  bucket 'i_haz_an_s3_buckit'
  remote_path 'path/in/s3/bukket/to/foo'
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  region 'us-west-1'
end
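Applied to the recipe in the question, the resource would then read (everything else unchanged):
aws_s3_file '/tmp/authz.war' do
  bucket 'living-artifacts-dev'
  remote_path 'authz/authz.war'   # no leading slash
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  region 'eu-central-1'
end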
You have a forbidden error:
403 "Forbidden"
If your system is on AWS, you need to ensure it has an appropriate IAM policy attached that grants at least READ on the bucket, and specifically on the file you need.
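A minimal policy sketch along those lines might look like this (bucket and key taken from the question; scope it as tightly as you need):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::living-artifacts-dev/authz/*"
    }
  ]
}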

Kubernetes 1.4.3 spin up script stays in loop forever on AWS

When I run cluster/kube-up.sh it loops on "Waiting for cluster initialization". I tried to spin up a cluster in us-west-1, us-west-2, and eu-west-1 several times with no success.
Here is the output from the startup script:
$ export KUBE_AWS_ZONE=eu-west-1a
$ export NUM_NODES=3
$ export KUBE_AWS_INSTANCE_PREFIX=test
$ export MASTER_SIZE=m3.medium
$ export NODE_SIZE=t2.medium
$ export KUBERNETES_PROVIDER=aws
$ ./cluster/kube-up.sh
... Starting cluster in eu-west-1a using provider aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: jessie
Uploading to Amazon S3
Creating kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586
make_bucket: s3://kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586/
Confirming bucket was created...
+++ Staging server tars to S3 Storage: kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586/devel
upload: ../../tmp/kubernetes.Bj5OaA/s3/bootstrap-script to s3://kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586/devel/bootstrap-script
upload: ../../tmp/kubernetes.Bj5OaA/s3/kubernetes-salt.tar.gz to s3://kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586/devel/kubernetes-salt.tar.gz
upload: ../../tmp/kubernetes.Bj5OaA/s3/kubernetes-server-linux-amd64.tar.gz to s3://kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586/devel/kubernetes-server-linux-amd64.tar.gz
Uploaded server tars:
SERVER_BINARY_TAR_URL: https://s3.amazonaws.com/kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586/devel/kubernetes-server-linux-amd64.tar.gz
SALT_TAR_URL: https://s3.amazonaws.com/kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586/devel/kubernetes-salt.tar.gz
BOOTSTRAP_SCRIPT_URL: https://s3.amazonaws.com/kubernetes-staging-17d502113db4ff6c4fb9c4b42955c586/devel/bootstrap-script
INSTANCEPROFILE arn:aws:iam::333659885792:instance-profile/kubernetes-master 2016-07-28T10:52:30Z AIPAJ2ESPVLTF7USVISQI kubernetes-master /
ROLES arn:aws:iam::333659885792:role/kubernetes-master 2016-07-28T10:52:29Z / AROAJGF5WAAV5OFWYKBHW kubernetes-master
ASSUMEROLEPOLICYDOCUMENT 2012-10-17
STATEMENT sts:AssumeRole Allow
PRINCIPAL ec2.amazonaws.com
INSTANCEPROFILE arn:aws:iam::333659885792:instance-profile/kubernetes-minion 2016-08-04T08:41:10Z AIPAJYJLZTINGNI4RFBLY kubernetes-minion /
ROLES arn:aws:iam::333659885792:role/kubernetes-minion 2016-08-04T08:41:10Z / AROAIWUBQVYHYHTSSEH6C kubernetes-minion
ASSUMEROLEPOLICYDOCUMENT 2012-10-17
STATEMENT sts:AssumeRole Allow
PRINCIPAL ec2.amazonaws.com
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/kube_aws_rsa.
Your public key has been saved in /root/.ssh/kube_aws_rsa.pub.
The key fingerprint is:
25:a6:8e:4f:79:2f:75:bf:55:6e:68:c6:7a:35:28:a9 root@ip-172-31-12-179
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| o . |
| o o |
| . S . . .|
| o . .o.o +o|
| . + ......=.=|
| o ..E +oo |
| . .. .... |
+-----------------+
Using SSH key with (AWS) fingerprint: 25:a6:8e:4f:79:2f:75:bf:55:6e:68:c6:7a:35:28:a9
Using VPC vpc-55137031
Adding tag to dopt-c7f311a3: Name=kubernetes-dhcp-option-set
Adding tag to dopt-c7f311a3: KubernetesCluster=test
Using DHCP option set dopt-c7f311a3
Using existing subnet with CIDR 172.20.0.0/24
Using subnet subnet-0a43046e
Creating Internet Gateway.
Using Internet Gateway igw-9dc42bf9
Associating route table.
Creating route table
Adding tag to rtb-da9cb4be: KubernetesCluster=test
Associating route table rtb-da9cb4be to subnet subnet-0a43046e
Adding route to route table rtb-da9cb4be
Using Route Table rtb-da9cb4be
Creating master security group.
Creating security group kubernetes-master-test.
Adding tag to sg-80b072e6: KubernetesCluster=test
Creating minion security group.
Creating security group kubernetes-minion-test.
Adding tag to sg-8cb072ea: KubernetesCluster=test
Using master security group: kubernetes-master-test sg-80b072e6
Using minion security group: kubernetes-minion-test sg-8cb072ea
Creating master disk: size 20GB, type gp2
Adding tag to vol-1f19bb9d: Name=test-master-pd
Adding tag to vol-1f19bb9d: KubernetesCluster=test
Allocated Elastic IP for master: 52.49.10.199
Adding tag to vol-1f19bb9d: kubernetes.io/master-ip=52.49.10.199
Generating certs for alternate-names: IP:52.49.10.199,IP:172.20.0.9,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:test-master
Starting Master
Adding tag to i-ac1ca727: Name=test-master
Adding tag to i-ac1ca727: Role=test-master
Adding tag to i-ac1ca727: KubernetesCluster=test
Waiting for master to be ready
Attempt 1 to check for master nodeWaiting for instance i-ac1ca727 to be running (currently pending)
Sleeping for 3 seconds...
Waiting for instance i-ac1ca727 to be running (currently pending)
Sleeping for 3 seconds...
Waiting for instance i-ac1ca727 to be running (currently pending)
Sleeping for 3 seconds...
[master running]
Attaching IP 52.49.10.199 to instance i-ac1ca727
Attaching persistent data volume (vol-1f19bb9d) to master
2016-10-19T09:41:18.422Z /dev/sdb i-ac1ca727 attaching vol-1f19bb9d
cluster "aws_test" set.
user "aws_test" set.
context "aws_test" set.
switched to context "aws_test".
user "aws_test-basic-auth" set.
Wrote config for aws_test to /root/.kube/config
Creating minion configuration
Creating autoscaling group
0 minions started; waiting
0 minions started; waiting
0 minions started; waiting
0 minions started; waiting
3 minions started; ready
Waiting for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This might loop forever if there was some uncaught error during start
up.
..............................................................................................................................................................................................^C
$ ./cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.3", GitCommit:"4957b090e9a4f6a68b4a40375408fdc74a212260", GitTreeState:"clean", BuildDate:"2016-10-16T06:36:33Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Some info from master node:
$ ps -ef | grep kube
root 620 618 0 10:00 pts/0 00:00:00 grep kube
$ cat /var/log/kube-apiserver.log
cat: /var/log/kube-apiserver.log: No such file or directory
$ cat /var/log/cloud-init.log | grep -i error
$
I had a similar problem; it was related to a DNS server IP being improperly allocated. In my case, SaltStack (the framework used to provision master services) broke while resolving local hostnames during master spin-up.
Your startup parameters seem rather innocuous, so I don't really have any particular ideas about your case. But maybe checking the same logs for similar error messages would help, so that at least you could file a faster-fixable issue for the k8s team.
I suggest double-checking /var/log/syslog for something similar to the excerpt below; if you find such signs of SaltStack provisioning issues, go backward in time from that point and search for anything that looks abnormal to you.
Sep 27 13:03:36 ip-172-40-0-9 rc.local[374]: -------------
Sep 27 13:03:36 ip-172-40-0-9 rc.local[374]: Succeeded: 89
Sep 27 13:03:36 ip-172-40-0-9 rc.local[374]: Failed: 6
Sep 27 13:03:36 ip-172-40-0-9 rc.local[374]: -------------
Sep 27 13:03:36 ip-172-40-0-9 rc.local[374]: Total: 95
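For example, a quick way to spot a non-zero failure summary like the one above (assuming the same rc.local logging seen here):
grep -n 'Failed:' /var/log/syslog | grep -v 'Failed: 0'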
Here is the fully detailed issue I filed: https://github.com/kubernetes/kubernetes/issues/33559 - maybe some further details there would help (not likely, though).

AWS Code deploy, script failed with exit code 66

I'm using AWS CodeDeploy to deploy my ASP.NET application into an Auto Scaling group.
When deploying I get this error: Script at specified location: application-start.bat failed with exit code 66.
From what I've seen, error code 66 is "The network resource type is not correct", which is very bizarre in this case...
My bundle contains an appspec.yml file like this:
version: 0.0
os: windows
files:
  - source: ./
    destination: c:\inetpub\wwwroot
hooks:
  ApplicationStop:
    - location: application-stop.bat
      timeout: 900
  ApplicationStart:
    - location: application-start.bat
      timeout: 900
And the two .bat files (application-stop / application-start) contain only one line each:
iisreset /stop
iisreset /start
When I go to the EC2 instance to look at the CodeDeploy agent logs, it's no clearer to me:
2016-04-04 08:58:42 ERROR [codedeploy-agent(2848)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Error during perform: InstanceAgent::Plugins::CodeDeployPlugin::ScriptError - Script at specified location: application-start.bat failed with exit code 66
C:/Windows/TEMP/ocr512.tmp/src/lib/instance_agent/plugins/codedeploy/hook_executor.rb:150:in 'execute_script'
C:/Windows/TEMP/ocr512.tmp/src/lib/instance_agent/plugins/codedeploy/hook_executor.rb:107:in 'block (2 levels) in execute'
Has anyone run into the same issue and found a way to fix it?