Serving Amazon S3 Content by Amazon CloudFront

I have this code that updates the SQL table with the files' URLs when they are backed up to Amazon S3:
$sql = "update " . PVS_DB_PREFIX . "filestorage_files set filename1='" . $file .
"',filename2='" . $new_filename . "',url='" . $url[0] .
"',filesize=" . filesize( $publication_path .
"/" . $file ) . ",width=" . $width . ",height=" . $height .
" where id_parent=" .
$rs->row["id"] . " and item_id=" . $items_mass[$file];
$db->execute( $sql );
Then I delete the files that were moved to Amazon S3 from the local server with:
//delete files from the local server
for ( $i = 0; $i < count( $delete_mass ); $i++ ) {
pvs_delete_files( ( int )$delete_mass[$i], false );
}
Now the files are in the database with the Amazon S3 URL, but I need them to be served on the front end by Amazon CloudFront, so I will need to update the SQL table again to change the URLs of the files that were moved to Amazon S3:
//CloudFront: update URL on thumbs preview
$sql = "update " . PVS_DB_PREFIX .
"filestorage_files set url='http://www.cloudfront.com/exmaple' item_id=" .
$items_mass[$file] == 0;
$db->execute( $sql );
But... something here is not working right. Can anyone help me with this?
Regards

Actually, the last part of your question is not clear, but you should deploy a CloudFront distribution and set your S3 bucket as its origin. You will then have a unique URL for your published CloudFront distribution, and you can simply append your file name to that base URL.
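For what it's worth, here is a rough PHP sketch of that final UPDATE, assuming a hypothetical distribution domain (d1234abcd.cloudfront.net below stands in for whatever domain CloudFront assigns you) and that $url[0] still holds the full S3 URL stored earlier. Note that the query in the question is also missing the where keyword, and the trailing == 0 turns $sql into a boolean instead of adding a condition.
// CloudFront serves the same object keys as its S3 origin, so only the host part changes.
$cloudfront_domain = 'https://d1234abcd.cloudfront.net'; // hypothetical distribution domain
$object_key = ltrim( parse_url( $url[0], PHP_URL_PATH ), '/' ); // key portion of the stored S3 URL
$cloudfront_url = $cloudfront_domain . '/' . $object_key;
// Update only the row for this item, with the URL quoted and a proper WHERE clause.
$sql = "update " . PVS_DB_PREFIX . "filestorage_files set url='" . $cloudfront_url .
       "' where item_id=" . ( int )$items_mass[$file];
$db->execute( $sql );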

Related

How to Properly Change a Google Kubernetes Engine Node Pool Using Terraform?

I have successfully created a Google Kubernetes Engine (GKE) cluster ($GKE_CLUSTER_NAME) inside of a Google Cloud Platform (GCP) project ($GCP_PROJECT_NAME):
gcloud container clusters list \
--format="value(name)" \
--project=$GCP_PROJECT_NAME
#=>
. . .
$GKE_CLUSTER_NAME
. . .
which uses the node pool $GKE_NODE_POOL:
gcloud container node-pools list \
--cluster=$GKE_CLUSTER_NAME \
--format="value(name)" \
--zone=$GKE_CLUSTER_ZONE
#=>
$GKE_NODE_POOL
I am checking this configuration into SCM using Terraform with the following container_node_pool.tf:
resource "google_container_node_pool" ". . ." {
autoscaling {
max_node_count = "3"
min_node_count = "3"
}
. . .
initial_node_count = "3"
. . .
}
and I confirmed that the Terraform configuration above matches $GKE_NODE_POOL as it is currently running inside $GKE_CLUSTER_NAME and $GCP_PROJECT_NAME:
terraform plan
#=>
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
If I want to make a change to $GKE_NODE_POOL:
resource "google_container_node_pool" ". . ." {
autoscaling {
max_node_count = "4"
min_node_count = "4"
}
. . .
initial_node_count = "4"
. . .
}
and scale the number of nodes in $GKE_NODE_POOL from 3 to 4, I get the following output when trying to plan:
terraform plan
#=>
. . .
Plan: 1 to add, 0 to change, 1 to destroy.
. . .
How can I update $GKE_NODE_POOL without destroying and then recreating the resource?
Changing the initial_node_count argument for any google_container_node_pool will trigger destruction and recreation. Just don't modify initial_node_count and you should be able to modify $GKE_NODE_POOL arguments such as min_node_count and max_node_count.
The output of the plan command should explicitly show you which argument causes destruction and recreation behavior [in red]:
terraform plan
. . .
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# google_container_node_pool.$GKE_NODE_POOL must be replaced
-/+ resource "google_container_node_pool" ". . ." {
. . .
~ initial_node_count = 3 -> 4 # forces replacement
. . .
Plan: 1 to add, 0 to change, 1 to destroy.
. . .
The initial_node_count argument seems to be the only argument for google_container_node_pool that causes this behavior; it also appears to be optional.
You can read this warning in the official documentation here.
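For example, here is a sketch (not a definitive recipe) of a change that avoids the replacement, with the resource name elided exactly as in the question: bump only the autoscaling bounds and leave initial_node_count at the value the node pool was created with. The optional lifecycle block at the end tells Terraform to ignore future drift of that argument altogether.
resource "google_container_node_pool" ". . ." {
  autoscaling {
    max_node_count = "4"
    min_node_count = "4"
  }
  . . .
  # Unchanged: modifying this value is what forces the destroy/create cycle.
  initial_node_count = "3"
  . . .
  # Optional: ignore changes to initial_node_count from now on.
  lifecycle {
    ignore_changes = [initial_node_count]
  }
}
With that in place, terraform plan should report an in-place update (~) instead of a replacement (-/+).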

Download Last 24 hour files from s3 using Powershell

I have an S3 bucket with many different filenames. I need to download the specific files (filenames that start with "impression") that were created or modified in the last 24 hours from the S3 bucket to a local folder using PowerShell.
$items = Get-S3Object -BucketName $sourceBucket -ProfileName $profile -Region 'us-east-1' |
    Sort-Object LastModified -Descending |
    Select-Object -First 1 |
    select Key
Write-Host "$($items.Length) objects to copy"
$index = 1
$items | % {
    Write-Host "$index/$($items.Length): $($_.Key)"
    $fileName = $Folder + ".\$($_.Key.Replace('/','\'))"
    Write-Host "$fileName"
    Read-S3Object -BucketName $sourceBucket -Key $_.Key -File $fileName -ProfileName $profile -Region 'us-east-1' > $null
    $index += 1
}
A workaround might be to turn on access logging: since each access log entry contains a timestamp, you can collect all access logs from the past 24 hours, de-duplicate the repeated S3 objects, and then download them all.
You can enable S3 access logging in the bucket settings; the logs will be stored in another bucket.
If you end up writing a script for this, just bear in mind that downloading the S3 objects will itself create new access logs, making the operation irreversible.
If you want something fancier, you can even query the logs and deduplicate them using AWS Athena.
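That said, Get-S3Object already returns a LastModified timestamp for every key, so a more direct sketch (untested here; it assumes the AWS Tools for PowerShell module and reuses the $sourceBucket, $profile, and $Folder variables from the question) is to filter the listing itself by key prefix and age:
# Keep only keys that start with "impression" and were modified in the last 24 hours,
# then download each matching object into the local folder.
$cutoff = (Get-Date).AddHours(-24)
Get-S3Object -BucketName $sourceBucket -KeyPrefix 'impression' -ProfileName $profile -Region 'us-east-1' |
    Where-Object { $_.LastModified -gt $cutoff } |
    ForEach-Object {
        $fileName = Join-Path $Folder ($_.Key.Replace('/', '\'))
        Read-S3Object -BucketName $sourceBucket -Key $_.Key -File $fileName -ProfileName $profile -Region 'us-east-1' > $null
    }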

Parse File Migration to AWS

Has anyone had any success migrating files from the Parse S3 bucket to an S3 bucket of their own? I have an app that contains many files (images), and I currently serve them from both my own S3 bucket and from the Parse bucket using the S3 File Adapter, but I would like to migrate the physical files to my own bucket on AWS, where the app will now be hosted.
Thanks in advance!
If you've configured your new Parse instance to host files with the S3 file adapter, you could write a PHP script that downloads the files from the Parse S3 bucket and uploads them to your own. In my example (using the Parse-PHP-SDK):
I loop through every entry.
I download the binary of that file (hosted on Parse).
I upload it as a new ParseFile (if your server is configured for S3, it will be uploaded to your own S3 bucket).
I apply that new ParseFile to the entry.
Voilà!
<?php
require 'vendor/autoload.php';
use Parse\ParseObject;
use Parse\ParseQuery;
use Parse\ParseACL;
use Parse\ParsePush;
use Parse\ParseUser;
use Parse\ParseInstallation;
use Parse\ParseException;
use Parse\ParseAnalytics;
use Parse\ParseFile;
use Parse\ParseCloud;
use Parse\ParseClient;
$app_id = "AAA";
$rest_key = "BBB";
$master_key = "CCC";
ParseClient::initialize( $app_id, $rest_key, $master_key );
ParseClient::setServerURL('http://localhost:1338/','parse');
$query = new ParseQuery("YourClass");
$query->descending("createdAt"); // just because of my preference
$count = $query->count();
for ($i = 0; $i < $count; $i++) {
    try {
        $query->skip($i);
        // get entry
        $entryWithFile = $query->first();
        // get file
        $parseFile = $entryWithFile->get("file");
        // filename
        $fileName = $parseFile->getName();
        echo "\nFilename #".$i.": ". $fileName;
        echo "\nObjectId: ".$entryWithFile->getObjectId();
        // if the file is hosted in Parse, do the job; otherwise continue with the next one
        if (strpos($fileName, "tfss-") === false) {
            echo "\nThis is already an internal file, skipping...";
            continue;
        }
        $newFileName = str_replace("tfss-", "", $fileName);
        $binaryFile = file_get_contents($parseFile->getURL());
        // null by default, you don't need to specify it if you don't want to.
        $fileType = "binary/octet-stream";
        $newFile = ParseFile::createFromData($binaryFile, $newFileName, $fileType);
        $entryWithFile->set("file", $newFile);
        $entryWithFile->save(true);
        echo "\nFile saved\n";
    } catch (Exception $e) {
        // The connection with Mongo or the server could drop for a few seconds; retry this entry.
        $i = $i - 1;
        sleep(10);
        continue;
    }
}
echo "\n";
echo "¡FIN!";
?>

ERROR: gcloud.dns.managed-zone.create

I am using a Google Cloud VM that works very well with the gcloud command line.
I followed the getting-started guide for Google Cloud DNS:
- Install the Cloud SDK
- Authenticate on the command line
- Create a project
- Enable the DNS API
- Verify domain ownership.
but when I execute:
gcloud dns managed-zone create --dns_name="mydomaine.com." --description="A test zone" mydomainezonename
Creating {'dnsName': 'mydomaine.com.', 'name': 'Examplezone', 'description': 'mydomaine test zone'} in PROJECT
Do you want to continue (Y/n)? Y
ERROR: (gcloud.dns.managed-zone.create) ResponseError: status=400, code=Bad Request, reason(s)=invalid
message=Invalid value for project: PROJECT
I searched the web but could not find the cause of this problem.

AWS SDK php S3 refuses to access bucket name xx.my_domain.com

I want to use AWS S3 to store image files for my website. I created a bucket named images.mydomain.com, which is referenced by a DNS CNAME record for images.mydomain.com in AWS Route 53.
I want to check whether a folder or file exists; if not, I will create it.
The following PHP code works fine for a regular bucket name using the stream wrapper, but it fails for this type of bucket name, such as xxxx.mydomain.com. This kind of bucket name fails in the doesObjectExist() method too.
// $new_dir = "s3://aaaa/akak3/kk1/yy3/ww4" ; // this line works !
$new_dir = "s3://images.mydomain.com/us000000/10000" ; // this line fails !
if( !file_exists( $new_dir ) ){
    if( !mkdir( $new_dir , 0777 , true ) ) {
        echo "create new dir $new_dir failed ! <br>" ;
    } else {
        echo "SUCCEED in creating new dir $new_dir <br>" ;
    }
} else {
    echo "dir $new_dir already exists. Skip creating dir ! <br>" ;
}
I got the following message:
Warning: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint: "images.mydomain.com.s3.amazonaws.com". in C:\AppServ\www\ecity\vendor\aws\aws-sdk-php\src\Aws\S3\StreamWrapper.php on line 737
What is the problem here?
Any advice on what to do in this case?
Thanks!