Get-S3Object returns keyprefix (folder) itself?

I am using the S3 PowerShell API (Get-S3Object) to retrieve files from S3, and it behaves in a way I don't understand.
I run the following command first:
Get-S3Object -BucketName "tools-bucket" -KeyPrefix "Rollback/ust1twastool01a"
It returns this list:
ETag : "d41d8cd98f00b204e9800998ecf8427e"
Key : Rollback/ust1twastool01a/
LastModified : 11/7/2016 3:24:13 PM
Owner : Amazon.S3.Model.Owner
Size : 0
StorageClass : STANDARD
ETag : "e0ada177422c1fe4d9bd9801636f4e8a"
Key : Rollback/ust1twastool01a/Rollback_Kit.txt
LastModified : 11/7/2016 3:25:00 PM
Owner : Amazon.S3.Model.Owner
Size : 626
StorageClass : STANDARD
The first entry is the key prefix itself, which is a folder. Then I run the command with another key prefix:
Get-S3Object -BucketName "tools-bucket" -KeyPrefix "Rollback/autopatch"
It returns this:
ETag : "4c3723148b9fb78d5b182c72aa6f1866-62"
Key : Rollback/autopatch/2016-08-30_21-15-17_server-1.1.20558_client-1.1.20518.zip
LastModified : 8/30/2016 5:18:43 PM
Owner : Amazon.S3.Model.Owner
Size : 323772907
StorageClass : STANDARD
ETag : "bfc65b2cde2c3f24a2086ca503270a54"
Key : Rollback/autopatch/buildRecords.txt
LastModified : 8/30/2016 5:19:44 PM
Owner : Amazon.S3.Model.Owner
Size : 53
StorageClass : STANDARD
This time, the key prefix itself is not returned. I can't figure out why this happens.

This typically happens when you create a folder in the console: the console creates a zero-byte object whose key ends in / as a placeholder. These placeholders aren't needed unless you want to navigate "into" a folder manually to upload objects. Your workaround is to skip zero-byte objects whose keys end in /.
http://docs.aws.amazon.com/AmazonS3/latest/UG/about-using-console.html#welcome-folder-concept
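For example, a filter along these lines (a sketch reusing the bucket and key prefix from the question) would drop the placeholder entries:

# Skip zero-byte objects whose keys end in "/" (console folder placeholders)
Get-S3Object -BucketName "tools-bucket" -KeyPrefix "Rollback/ust1twastool01a" |
    Where-Object { -not ($_.Key.EndsWith("/") -and $_.Size -eq 0) }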


Unable to change the Regional format for EC2 instance via User-Data script

I am trying to change the EC2 instance's Regional format setting to en-GB by running the Set-Culture en-GB command in my user-data script:
<powershell>
# set timezone to GMT
tzutil /s "GMT Standard Time"
# set the date format to the UK -> dd/mm/yyyy (we set it at IIS server level which would apply to all IIS apps automatically)
c:\windows\system32\inetsrv\appcmd.exe set config /commit:WEBROOT /section:globalization /culture:en-GB
c:\windows\system32\inetsrv\appcmd.exe set config /commit:WEBROOT /section:globalization /uiCulture:en-GB
# Set Regional format to UK (affects setting in Region Settings/Region/Regional format dropdown menu)
Set-Culture en-GB
</powershell>
<runAsLocalSystem>true</runAsLocalSystem>
However, when the instance boots up and I RDP to it, I find that the date format is still the US default (mm/dd/yyyy) and not the UK format (dd/mm/yyyy).
What am I missing here?
Update 1
User-Data execution log:
2022/10/16 16:43:00Z: Begin user data script process.
2022/10/16 16:43:00Z: Unable to parse <persist> tags. This can happen when tags are unmatched or poorly formed.
2022/10/16 16:43:00Z: Sending telemetry bool: IsUserDataScheduledPerBoot
2022/10/16 16:43:00Z: Unregister the scheduled task to persist user data.
2022/10/16 16:43:05Z: Unable to parse <runAsLocalSystem> tags. This can happen when tags are unmatched or poorly formed.
2022/10/16 16:43:05Z: Unable to parse <script> tags. This can happen when tags are unmatched or poorly formed.
2022/10/16 16:43:05Z: Unable to parse <powershellArguments> tags. This can happen when tags are unmatched or poorly formed.
2022/10/16 16:43:05Z: <powershell> tag was provided.. running powershell content
2022/10/16 16:43:12Z: User data script completed.
2022/10/16 17:03:06Z: Begin user data script process.
2022/10/16 17:03:06Z: Failed to get metadata: The result from http://169.254.169.254/latest/user-data was empty
2022/10/16 17:03:11Z: Launch metadata did not include a user data script.
2022/10/16 17:03:11Z: User data script completed.
2022/10/20 08:27:33Z: Begin user data script process.
2022/10/20 08:27:33Z: Failed to get metadata: The result from http://169.254.169.254/latest/user-data was empty
2022/10/20 08:27:38Z: Launch metadata did not include a user data script.
2022/10/20 08:27:38Z: User data script completed.
2022/10/21 09:50:52Z: Begin user data script process.
2022/10/21 09:50:52Z: Unable to parse <persist> tags. This can happen when tags are unmatched or poorly formed.
2022/10/21 09:50:52Z: Sending telemetry bool: IsUserDataScheduledPerBoot
2022/10/21 09:50:52Z: Unregister the scheduled task to persist user data.
2022/10/21 09:50:57Z: <runAsLocalSystem> tag was provided: true
2022/10/21 09:50:57Z: Running user data as local system.
2022/10/21 09:50:57Z: Unable to parse <script> tags. This can happen when tags are unmatched or poorly formed.
2022/10/21 09:50:57Z: Unable to parse <powershellArguments> tags. This can happen when tags are unmatched or poorly formed.
2022/10/21 09:50:57Z: <powershell> tag was provided.. running powershell content
2022/10/21 09:52:36Z: Message: The errors from user data script: New-Item : An item with the specified name C:\temp already exists.
At
C:\Windows\system32\config\systemprofile\AppData\Local\Temp\Amazon\EC2-Windows\Launch\InvokeUserData\UserScript.ps1:96
char:1
+ New-Item -Path "C:\temp" -ItemType Directory
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceExists: (C:\temp:String) [New-Item], IOException
+ FullyQualifiedErrorId : DirectoryExist,Microsoft.PowerShell.Commands.NewItemCommand
2022/10/21 09:52:36Z: Message: The output from user data script: Join Domain User: ENR\svc_enr_domainjoin
ComputerName: EC2AMAZ-K1I0FAF
Reboot Required...
VERBOSE: Performing the operation "Join in domain 'ENR.cloud'" on target "EC2AMAZ-K1I0FAF".
HasSucceeded ComputerName
------------ ------------
True EC2AMAZ-K1I0FAF
WARNING: The changes will take effect after you restart the computer EC2AMAZ-K1I0FAF.
LocalPath : Z:
RemotePath : \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev
RequireIntegrity : False
RequirePrivacy : False
Status : OK
UseWriteThrough : False
PSComputerName :
Backing up IIS config to backup named 20221021-095225-Login
Creating virtual directories on site: Login
... Adding a virtual directory 'AnnouncementImages' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\AnnouncementImages
- Virtual directory 'IIS:\Sites\Login\AnnouncementImages' created sucessfully
... Adding a virtual directory 'apr' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\WebsiteCharts\oldAPRStructure
- Virtual directory 'IIS:\Sites\Login\apr' created sucessfully
... Adding a virtual directory 'AprSchematics' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\WebsiteCharts\apr\schematics
- Virtual directory 'IIS:\Sites\Login\AprSchematics' created sucessfully
... Adding a virtual directory 'BprDiagrams' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\WebsiteCharts\bpr\diagrams
- Virtual directory 'IIS:\Sites\Login\BprDiagrams' created sucessfully
... Adding a virtual directory 'Charts' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\WebsiteCharts
- Virtual directory 'IIS:\Sites\Login\Charts' created sucessfully
... Adding a virtual directory 'dls' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\dls
- Virtual directory 'IIS:\Sites\Login\dls' created sucessfully
... Adding a virtual directory 'Pre00CprData' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\Pre2000\cpr
- Virtual directory 'IIS:\Sites\Login\Pre00CprData' created sucessfully
... Adding a virtual directory 'Pre00DprData' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\Pre2000\dpr
- Virtual directory 'IIS:\Sites\Login\Pre00DprData' created sucessfully
... Adding a virtual directory 'Pre15BprData' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\Pre2000\bpr
- Virtual directory 'IIS:\Sites\Login\Pre15BprData' created sucessfully
... Adding a virtual directory 'WBook' for physical path \\myTeamreview_eu_dev.enr.ihsenergy.com\myTeamreview_eu_dev\Archive\Barra\live_content\wbook
- Virtual directory 'IIS:\Sites\Login\WBook' created sucessfully
Applied configuration changes to section "system.web/globalization" for "MACHINE/WEBROOT/APPHOST" at configuration commit path "MACHINE/WEBROOT"
Applied configuration changes to section "system.web/globalization" for "MACHINE/WEBROOT/APPHOST" at configuration commit path "MACHINE/WEBROOT"
PSPath : Microsoft.PowerShell.Security\Certificate::LocalMachine\my\0F006BBA30488C454380998CD818B7933CBABB7F
PSParentPath : Microsoft.PowerShell.Security\Certificate::LocalMachine\my
PSChildName : 0F006BBA30488C454380998CD818B7933CBABB7F
PSIsContainer : False
Archived : False
Extensions : {System.Security.Cryptography.Oid, System.Security.Cryptography.Oid,
System.Security.Cryptography.Oid, System.Security.Cryptography.Oid}
FriendlyName : myTeamId
IssuerName : System.Security.Cryptography.X509Certificates.X500DistinguishedName
NotAfter : 10/21/2023 10:02:35 AM
NotBefore : 10/21/2022 9:42:35 AM
HasPrivateKey : True
PrivateKey :
PublicKey : System.Security.Cryptography.X509Certificates.PublicKey
RawData : {48, 130, 3, 42...}
SerialNumber : 3A249AE70AC9BC96464EDE49BEDD319B
SubjectName : System.Security.Cryptography.X509Certificates.X500DistinguishedName
SignatureAlgorithm : System.Security.Cryptography.Oid
Thumbprint : 0F006BBA30488C454380998CD818B7933CBABB7F
Version : 3
Handle : 1568489766128
Issuer : CN=EC2AMAZ-K1I0FAF
Subject : CN=EC2AMAZ-K1I0FAF
EnhancedKeyUsageList : {Client Authentication (1.3.6.1.5.5.7.3.2), Server Authentication (1.3.6.1.5.5.7.3.1)}
DnsNameList : {EC2AMAZ-K1I0FAF}
SendAsTrustedIssuer : False
EnrollmentPolicyEndPoint : Microsoft.CertificateServices.Commands.EnrollmentEndPointProperty
EnrollmentServerEndPoint : Microsoft.CertificateServices.Commands.EnrollmentEndPointProperty
PolicyId :
2022/10/21 09:52:36Z: User data script completed.

GoogleStorageException - 401 Unauthorized / Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket

I want to transfer data from GCS to BigQuery using Embulk and Digdag, but an error occurs:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 401 Unauthorized
.......
Error: org.embulk.config.ConfigException: com.google.cloud.storage.StorageException: Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.
Details:
Command:
embulk run XXXX.yaml
XXXX.yaml:
in:
  type: gcs
  bucket: <bucket name>
  path_prefix: <file path>
  auth_method: compute_engine
  parser:
    type: poi_excel
    sheets: <sheet name>
    skip_header_lines: 4
    columns:
    - {name: 'name', type: string}
    .
    .
    .
out:
  type: bigquery
  mode: replace
  project: <project name>
  dataset: <dataset name>
  table: <table name>
  auth_method: compute_engine
  schema_file: <file name of json type>
  gcs_bucket: <gcs tmp bucket name>
Output:
$ embulk run target_item_bottoms_config.yaml
2020-07-22 14:27:36.559 +0900: Embulk v0.9.23
2020-07-22 14:27:37.609 +0900 [WARN] (main): DEPRECATION: JRuby org.jruby.embed.ScriptingContainer is directly injected.
2020-07-22 14:27:40.577 +0900 [INFO] (main): Gem's home and path are set by default: "/Users/oniki/.embulk/lib/gems"
2020-07-22 14:27:41.662 +0900 [INFO] (main): Started Embulk v0.9.23
2020-07-22 14:27:41.853 +0900 [INFO] (0001:transaction): Loaded plugin embulk-input-gcs (0.3.2)
2020-07-22 14:27:46.263 +0900 [INFO] (0001:transaction): Loaded plugin embulk-output-bigquery (0.6.4)
2020-07-22 14:27:46.369 +0900 [INFO] (0001:transaction): Loaded plugin embulk-parser-poi_excel (0.1.7)
org.embulk.exec.PartialExecutionException: org.embulk.config.ConfigException: com.google.cloud.storage.StorageException: Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.
at org.embulk.exec.BulkLoader$LoaderState.buildPartialExecuteException(BulkLoader.java:340)
at org.embulk.exec.BulkLoader.doRun(BulkLoader.java:566)
at org.embulk.exec.BulkLoader.access$000(BulkLoader.java:35)
at org.embulk.exec.BulkLoader$1.run(BulkLoader.java:353)
at org.embulk.exec.BulkLoader$1.run(BulkLoader.java:350)
at org.embulk.spi.Exec.doWith(Exec.java:22)
at org.embulk.exec.BulkLoader.run(BulkLoader.java:350)
at org.embulk.EmbulkEmbed.run(EmbulkEmbed.java:242)
at org.embulk.EmbulkRunner.runInternal(EmbulkRunner.java:291)
at org.embulk.EmbulkRunner.run(EmbulkRunner.java:155)
at org.embulk.cli.EmbulkRun.runSubcommand(EmbulkRun.java:431)
at org.embulk.cli.EmbulkRun.run(EmbulkRun.java:90)
at org.embulk.cli.Main.main(Main.java:64)
Suppressed: java.lang.NullPointerException
at org.embulk.exec.BulkLoader.doCleanup(BulkLoader.java:463)
at org.embulk.exec.BulkLoader$3.run(BulkLoader.java:397)
at org.embulk.exec.BulkLoader$3.run(BulkLoader.java:394)
at org.embulk.spi.Exec.doWith(Exec.java:22)
at org.embulk.exec.BulkLoader.cleanup(BulkLoader.java:394)
at org.embulk.EmbulkEmbed.run(EmbulkEmbed.java:245)
... 5 more
Caused by: org.embulk.config.ConfigException: com.google.cloud.storage.StorageException: Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.
at org.embulk.input.gcs.AuthUtils.newClient(AuthUtils.java:81)
at org.embulk.input.gcs.GcsFileInput.listFiles(GcsFileInput.java:49)
at org.embulk.input.gcs.GcsFileInputPlugin.transaction(GcsFileInputPlugin.java:59)
at org.embulk.spi.FileInputRunner.transaction(FileInputRunner.java:62)
at org.embulk.exec.BulkLoader.doRun(BulkLoader.java:507)
... 11 more
Caused by: com.google.cloud.storage.StorageException: Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.
at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:226)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.list(HttpStorageRpc.java:366)
at com.google.cloud.storage.StorageImpl$8.call(StorageImpl.java:338)
at com.google.cloud.storage.StorageImpl$8.call(StorageImpl.java:335)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.storage.StorageImpl.listBlobs(StorageImpl.java:334)
at com.google.cloud.storage.StorageImpl.list(StorageImpl.java:290)
at org.embulk.input.gcs.AuthUtils.newClient(AuthUtils.java:77)
... 15 more
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 401 Unauthorized
{
"code" : 401,
"errors" : [ {
"domain" : "global",
"location" : "Authorization",
"locationType" : "header",
"message" : "Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.",
"reason" : "required"
} ],
"message" : "Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket."
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:150)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:401)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1097)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:499)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:549)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.list(HttpStorageRpc.java:356)
... 23 more
Error: org.embulk.config.ConfigException: com.google.cloud.storage.StorageException: Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.
My environment:
$ gcloud config list
[compute]
region = us-east1
zone = us-east1-c
[core]
account = myname#xxx.com
disable_usage_reporting = False
project = <project ID>
Your active configuration is: [default]
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* myname#xxxx.com
To set the active account, run:
$ gcloud config set account `ACCOUNT`
$ gsutil ls
gs://<bucket name>
My GCP IAM role:
owner
I understand that the solution to this error is authorization, but my settings seem to be fine.
What's wrong?
As the documentation [1] explains, a 401 Unauthorized error can happen for many reasons; the list below, taken from link [1], may help with troubleshooting:
Reason: AuthenticationRequiredRequesterPays
Access to a Requester Pays bucket requires authentication.
Reason: authError
This error indicates a problem with the authorization provided in the request to Cloud Storage. The following are some situations where that will occur:
The OAuth access token has expired and needs to be refreshed. This can be avoided by refreshing the access token early, but code can also catch this error, refresh the token and retry automatically.
Multiple non-matching authorizations were provided; choose one mode only.
The OAuth access token's bound project does not match the project associated with the provided developer key.
The Authorization header was of an unrecognized format or uses an unsupported credential type.
Reason: lockedDomainExpired
When downloading content from a cookie-authenticated site, e.g., using the Storage Browser, the response will redirect to a temporary domain. This error occurs if that domain is accessed after it expires. Issue the original request again to receive a new redirect.
Reason: push.webhookUrlUnauthorized
Requests to storage.objects.watchAll will fail unless you verify you own the domain.
Reason: required
Access to a non-public method that requires authorization was made, but no credentials were provided in the Authorization header or through other means. This matches the JSON error above ("reason" : "required"): the request was effectively anonymous.
[1] https://cloud.google.com/storage/docs/json_api/v1/status-codes#401_Unauthorized
My fix: I tried it locally, created a service account key, and saved it locally, then changed XXXX.yaml.
Before:
auth_method: compute_engine
After:
auth_method: json_key
json_keyfile: /path/to/json_keyfile.json
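With that change, the in: section ends up looking like this (same placeholders as above; json_keyfile points at the downloaded service account key):

in:
  type: gcs
  bucket: <bucket name>
  path_prefix: <file path>
  auth_method: json_key
  json_keyfile: /path/to/json_keyfile.json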

I need to store the job IDs of multiple aws glacier initiate-job requests from the CLI

I run the following line in a loop with different archive IDs. The AWS documentation does not list any parameter I can think of that would capture the message the command sends back.
This is the line I call inside the loop:
aws glacier initiate-job --account-id 9999999999 --vault-name vaultee --job-parameters file://params.json
I was wondering if it's possible to capture the output so I can parse it and get the job ID of each request. That way, when I need to check the status, I can just loop through all the job IDs.
Example output:
[{
    "JobId" : "lskdjfoksdjfa;lkjlk3j2lk24j",
    "ArchiveId" : "salskdjflksdjfklsdfas324234",
    "Date" : "date"
},
{
    "JobId" : "5468726w8f465wdf",
    "ArchiveId" : "sdf3243523432",
    "Date" : "date"
},
:
:
{
    "JobId" : "kjhdfkjhasdjkfhsakjdfs",
    "ArchiveId" : "78678fgdfgsedf",
    "Date" : "date"
}]
You can use the --query and --output parameters to extract the IDs.
In your case, the command could be something in the following form (writing from memory, so I can't confirm):
job_ids=$(aws glacier initiate-job \
    --account-id 9999999999 \
    --vault-name vaultee \
    --job-parameters file://params.json \
    --query 'jobId' --output text)
echo ${job_ids}
You can then iterate over ${job_ids} in bash, checking their status with a different AWS CLI command, as sketched below.
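A minimal sketch of such a status loop, assuming each iteration of your loop appended its ID to job_ids (aws glacier describe-job reports a Completed flag and a StatusCode for each job):

# Poll each job's status; StatusCode is InProgress, Succeeded, or Failed
for job_id in ${job_ids}; do
    aws glacier describe-job \
        --account-id 9999999999 \
        --vault-name vaultee \
        --job-id "${job_id}" \
        --query '[JobId, Completed, StatusCode]' --output text
done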

AWS CloudFormation issue installing custom binaries

I'm trying to install a custom-compiled package that I have in S3 as a zip file. I added this to my CloudFormation template:
"sources" : {
"/opt" : "https://s3.amazonaws.com/mybucket/installers/myapp-3.2.1.zip"
},
It downloads and unzips it into /opt without issues, but none of the "executable" files have the "x" permission, e.g. "-rw-r--r-- 1 root root 220378 Dec 4 18:23 myapp".
If I download the zip and unzip it in any directory, the permissions are OK.
I have already read the CloudFormation documentation and there is no clue there.
Can someone help me figure this out? Thanks in advance.
Maybe you can combine "configSets" (to guarantee execution order) with a "commands" element and write something like:
"AWS::CloudFormation::Init" : {
"configSets" : {
"default" : [ "download", "fixPermissions" ]
},
"download" : {
"sources" : {
"/opt" : "https://s3.amazonaws.com/mybucket/installers/myapp-3.2.1.zip"
},
},
"fixPermissions" : {
"commands" : {
"fixMyAppPermissions" : {
"command" : "chmod +x /opt/myapp-3.2.1/myapp"
}
}
}
}
Sources:
https://s3.amazonaws.com/cloudformation-examples/BoostrappingApplicationsWithAWSCloudFormation.pdf
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
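Note that AWS::CloudFormation::Init metadata is only applied when the instance runs cfn-init, so the UserData still needs an invocation along these lines (a sketch; the stack name, logical resource ID, and region are placeholders):

# Run both config sets in order: download, then fixPermissions
/opt/aws/bin/cfn-init -v --stack <stack name> --resource <logical resource id> --region <region> --configsets default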

MongoDB GridFS services on cloudfoundry

We use the MongoDB GridFS plugin to store uploaded files, and it works.
But while uploads worked at first, now they usually fail; we cannot seem to upload once we reach about 8MB.
Checking the status in MongoDB, I see two collections were created:
db.fs.chunks
db.fs.files
I typed this command:
> db.fs.chunks.stats()
{
    "ns" : "db.fs.chunks",
    "count" : 376,
    "size" : 84212168,
    "avgObjSize" : 223968.53191489363,
    "storageSize" : 84250624,
    "numExtents" : 8,
    "nindexes" : 2,
    "lastExtentSize" : 20594688,
    "paddingFactor" : 1,
    "flags" : 1,
    "totalIndexSize" : 49056,
    "indexSizes" : {
        "_id_" : 24528,
        "files_id_1_n_1" : 24528
    },
    "ok" : 1
}
Is there a limit on storageSize?
Thanks to everyone who can help.
Todd
The following storage limits are in place on CloudFoundry.com:
mysql: 128MB
redis: 16MB
mongo: 240MB
It may be that the connection is timing out when actually uploading the data. What actually happens when you try to perform the upload? You can also check how much of the mongo quota is already used, as sketched below.
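A quick check from the mongo shell (a sketch; db.stats() reports sizes in bytes, so this converts to MB):

> db.stats().storageSize / (1024 * 1024)    // total storage used in MB, to compare against the 240MB mongo limit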