Has anyone had any success migrating files from the Parse S3 bucket to an S3 bucket of their own? I have an app that contains many files (images), currently served from both my own S3 bucket and from the Parse bucket using the S3 File Adapter, but I would like to migrate the physical files to my own bucket on AWS, where the app will now be hosted.
Thanks in advance!
If you've configured your new Parse Server instance to host files with the S3 file adapter, you can write a PHP script that downloads each file from the Parse S3 bucket and uploads it to your own. In my example (using the Parse-PHP-SDK):
I loop through every entry.
I download the binary of that entry's file (hosted by Parse).
I upload it as a new ParseFile (if your server is configured for S3, it will be uploaded to your own S3 bucket).
I apply that new ParseFile to the entry and save it.
Voilà:
<?php
require 'vendor/autoload.php';

use Parse\ParseObject;
use Parse\ParseQuery;
use Parse\ParseACL;
use Parse\ParsePush;
use Parse\ParseUser;
use Parse\ParseInstallation;
use Parse\ParseException;
use Parse\ParseAnalytics;
use Parse\ParseFile;
use Parse\ParseCloud;
use Parse\ParseClient;

$app_id = "AAA";
$rest_key = "BBB";
$master_key = "CCC";

ParseClient::initialize($app_id, $rest_key, $master_key);
ParseClient::setServerURL('http://localhost:1338/', 'parse');

$query = new ParseQuery("YourClass");
$query->descending("createdAt"); // just my preference
$count = $query->count();

for ($i = 0; $i < $count; $i++) {
    try {
        $query->skip($i);
        // get the entry
        $entryWithFile = $query->first();
        // get the file
        $parseFile = $entryWithFile->get("file");
        // file name
        $fileName = $parseFile->getName();
        echo "\nFilename #" . $i . ": " . $fileName;
        echo "\nObjectId: " . $entryWithFile->getObjectId();
        // if the file is hosted by Parse, do the job; otherwise continue with the next one
        if (strpos($fileName, "tfss-") === false) {
            echo "\nThis is already an internal file, skipping...";
            continue;
        }
        $newFileName = str_replace("tfss-", "", $fileName);
        $binaryFile = file_get_contents($parseFile->getURL());
        // null by default; you don't need to specify it if you don't want to
        $fileType = "binary/octet-stream";
        $newFile = ParseFile::createFromData($binaryFile, $newFileName, $fileType);
        $entryWithFile->set("file", $newFile);
        $entryWithFile->save(true);
        echo "\nFile saved\n";
    } catch (Exception $e) {
        // The connection with Mongo or the server could drop for a few seconds; retry this entry
        $i = $i - 1;
        sleep(10);
        continue;
    }
}

echo "\n";
echo "¡FIN!";
?>
We would like to hand development over to another company via Azure DevOps, and we are wondering whether this pipeline can only push new releases, or whether data could also be downloaded from the productive environment via an Azure DevOps or AWS DevOps pipeline.
I researched this myself but found nothing about it.
Does any of you have more information on this?
Thank you
Is it possible to download files/data during the build pipeline on Azure DevOps?
In Azure DevOps there isn't a built-in task to download files/data, but you can use the PowerShell task to connect to an FTP server and download files.
For detailed information, you can refer to this similar question.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      #FTP Server Information - SET VARIABLES
      $ftp    = "ftp://XXX.com/"
      $user   = 'UserName'
      $pass   = 'Password'
      $folder = 'FTP_Folder'
      $target = "C:\Folder\Folder1\"

      #SET CREDENTIALS
      $credentials = New-Object System.Net.NetworkCredential($user, $pass)

      function Get-FtpDir ($url, $credentials) {
          $request = [Net.WebRequest]::Create($url)
          $request.Method = [System.Net.WebRequestMethods+FTP]::ListDirectory
          if ($credentials) { $request.Credentials = $credentials }
          $response = $request.GetResponse()
          $reader = New-Object IO.StreamReader $response.GetResponseStream()
          while (-not $reader.EndOfStream) {
              $reader.ReadLine()
          }
          #$reader.ReadToEnd()
          $reader.Close()
          $response.Close()
      }

      #SET FOLDER PATH
      $folderPath = $ftp + $folder + "/"
      $files = Get-FtpDir -url $folderPath -credentials $credentials
      $files

      $webclient = New-Object System.Net.WebClient
      $webclient.Credentials = New-Object System.Net.NetworkCredential($user, $pass)

      $counter = 0
      foreach ($file in ($files | Where-Object { $_ -like "*.txt" })) {
          $source      = $folderPath + $file
          $destination = $target + $file
          $webclient.DownloadFile($source, $destination)

          #PRINT FILE NAME AND COUNTER
          $counter++
          $counter
          $source
      }
Credit: the script comes from PowerShell Connect to FTP server and get files.
You should use artifacts when the data lives inside your own environment.
Otherwise you can use normal command-line tools such as git, curl, or wget; which ones are available depends on your build agent. For example:
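Below is a minimal, hedged sketch of such a step: it downloads a file over HTTPS in a PowerShell task and publishes it as a build artifact. The URL, file name, and artifact name are placeholders of my own, not anything defined by your environment.

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Download a file from the productive environment (placeholder URL)
      Invoke-WebRequest -Uri 'https://example.com/export/data.zip' -OutFile "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\data.zip"
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'downloaded-data'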
We previously generated a list of the Google API endpoints used by the SDK by grepping the source repo. Now that the repo no longer seems to be available, has anyone else found a way of obtaining such a list? We need to be able to whitelist these endpoints on our corporate firewall/proxy.
Thanks!
PART 1
If your objective is to whitelist URLs for your firewall, the URL *.googleapis.com will cover 99% of everything you need. There are only a few endpoints left:
bookstore.endpoints.endpoints-portal-demo.cloud.goog
cloudvolumesgcp-api.netapp.com
echo-api.endpoints.endpoints-portal-demo.cloud.goog
elasticsearch-service.gcpmarketplace.elastic.co
gcp.redisenterprise.com
payg-prod.gcpmarketplace.confluent.cloud
prod.cloud.datastax.com
PART 2
List the Google API endpoints that are available for your project with this command:
gcloud services list --available --format json | jq -r ".[].config.name"
https://cloud.google.com/sdk/gcloud/reference/services/list
Refer to PART 5 for a PowerShell script that produces a similar list.
PART 3
Process the Discovery Documents, which provide machine-readable information about each API:
Google API Discovery Service
curl https://www.googleapis.com/discovery/v1/apis | jq -r ".items[].discoveryRestUrl"
Once you have a list of discovery documents, process each document and extract the rootUrl key.
curl 'https://youtubereporting.googleapis.com/$discovery/rest?version=v1' | jq -r ".rootUrl"
(The $discovery segment is a literal part of the URL, so quote it to keep the shell from expanding it.)
PART 4
PowerShell script to process the Discovery Document and generate an API endpoint list:
Copy this code to a file named list_google_apis.ps1. Run the command as follows:
powershell ".\list_google_apis.ps1 | Sort-Object -Unique | Out-File -Encoding ASCII -FilePath apilist.txt"
There will be some errors displayed as some of the discovery document URLs result in 404 (NOT FOUND) errors.
$url_discovery = "https://www.googleapis.com/discovery/v1/apis"

$params = @{
    Uri         = $url_discovery
    ContentType = 'application/json'
}

$r = Invoke-RestMethod @params

foreach ($item in $r.items) {
    $url = $item.discoveryRestUrl
    try {
        $p = @{
            Uri         = $url
            ContentType = 'application/json'
        }
        $doc = Invoke-RestMethod @p
        $doc.rootUrl
    } catch {
        Write-Host "Failed:" $url -ForegroundColor Red
    }
}
PART 5
PowerShell script that I wrote a while back that produces similar output to gcloud services list.
Documentation for the API:
https://cloud.google.com/service-usage/docs/reference/rest/v1/services/list
<#
.SYNOPSIS
This program displays a list of Google Cloud services
.DESCRIPTION
Google Service Management allows service producers to publish their services on
Google Cloud Platform so that they can be discovered and used by service consumers.
.NOTES
This program requires the Google Cloud SDK CLI is installed and set up.
https://cloud.google.com/sdk/docs/quickstarts
.LINK
PowerShell Invoke-RestMethod
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/invoke-restmethod?view=powershell-5.1
Google Cloud CLI print-access-token Documentation
https://cloud.google.com/sdk/gcloud/reference/auth/print-access-token
Google Cloud API Documentation
https://cloud.google.com/service-infrastructure/docs/service-management/reference/rest
https://cloud.google.com/service-usage/docs/reference/rest/v1/services
https://cloud.google.com/service-infrastructure/docs/service-management/reference/rest/v1/services/list
#>

function Get-AccessToken {
    # Get an OAuth access token
    $accessToken = gcloud auth print-access-token
    return $accessToken
}

function Display-ServiceTable {
    Param([array][Parameter(Position = 0, Mandatory = $true)] $serviceList)

    if ($serviceList.Count -lt 1) {
        Write-Output "No services were found"
        return
    }

    # Display as a table
    $serviceList.serviceConfig | Select-Object name, title | Format-Table -Wrap | more
}

function Get-ServiceList {
    Param([string][Parameter(Position = 0, Mandatory = $true)] $accessToken)

    # Build the URL
    # https://cloud.google.com/service-infrastructure/docs/service-management/reference/rest/v1/services/list
    $url = "https://servicemanagement.googleapis.com/v1/services"

    # Build the Invoke-RestMethod parameters
    # https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/invoke-restmethod?view=powershell-5.1
    $params = @{
        Headers = @{
            Authorization = "Bearer " + $accessToken
        }
        Method = 'Get'
        ContentType = "application/json"
    }

    # Create an array to store the API output, which is an array of services
    $services = @()

    # Google APIs page the output
    $nextPageToken = $null

    do {
        if ($nextPageToken -eq $null) {
            $uri = $url
        } else {
            $uri = $url + "?pageToken=$nextPageToken"
        }

        try {
            # Get the list of services
            $output = Invoke-RestMethod @params -Uri $uri
        } catch {
            Write-Host "Error: REST API failed." -ForegroundColor Red
            Write-Host "URL: $url" -ForegroundColor Red
            Write-Host $_.Exception.Message -ForegroundColor Red
            return $services
        }

        # Debug: display as JSON
        # $output | ConvertTo-Json

        # Append services to the list
        $services += $output.services
        $nextPageToken = $output.nextPageToken
    } while ($nextPageToken -ne $null)

    return $services
}

############################################################
# Main Program
############################################################

$accessToken = Get-AccessToken

$serviceList = Get-ServiceList $accessToken

Display-ServiceTable $serviceList
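To run it, save the script and invoke it the same way as the PART 4 script (the file name list_google_services.ps1 is my own choice):
powershell ".\list_google_services.ps1"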
The jq command-line tool is used in the examples above.
I want to pass a cookie file to youtube-dl coming from a browser extension. The extension sends the cookie in raw header format, i.e. like VISITOR_INFO1_LIVE=_SebbjYciU0; YSC=tSqadPjjfd8; PREF=f4=4000000
But youtube-dl takes cookies in the Netscape cookie-jar format (as far as I know).
If I put the raw text cookie in a file and pass it to the --cookies=file.txt argument, youtube-dl raises an exception.
I cannot manage to convert my raw cookies to jar cookies and save them to a file on disk. I have searched for a solution but did not find an acceptable one.
I had a very similar problem converting "curl" cookies into wget ones.
The Netscape cookie-jar format is straightforward (see the cookie_spec link written into the generated file header below),
so I took a few minutes to write a quick Perl script that generates a cookie jar.
Here is the code (downloadable here):
#!/usr/bin/perl
# usage
#   perl cookiejar.pl {{url}} '{{cookie-string}}'
use YAML::Syck qw(Dump);

my $expires = $^T + 86400;
my $path    = '/';

my $url     = shift;
my $cookies = shift;

printf "--- # %s at %u\n", __FILE__, $^T;

my $domain;
my $p = index($url, '://') + 3;
my $l = index($url, '/', $p);
$domain = substr($url, $p, $l - $p);

my $dots = () = $domain =~ /\./g;
printf "dots: %s\n", $dots;
if ($dots > 1) {
    $domain = substr($domain, index($domain, '.'));
} else {
    $domain = '.' . $domain;
}
printf "domain: %s\n", $domain;
printf "url: %s\n", $url;

local *F; open F, '>', 'cookiejar.txt' or warn $!;
print F <<EOT;
# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
# This is a generated file! Do not edit.
# domain: $domain
# url: $url
EOT

my @cookies = split '; ', $cookies;
printf "--- %s...\n", Dump(\@cookies);

my %seen;
foreach my $cookie (@cookies) {
    my ($key, $value) = split('=', $cookie);
    if (! $seen{$key}++) {
        #         domain   access  path   secure  expire    cookie  value
        printf F "%s\t%s\t%s\t%s\t%s\t%s\t%s\n", $domain, 'TRUE', $path, 'FALSE', $expires, $key, $value;
    }
}
close F;

print "info: cookiejar.txt created\n";
printf "cmd: wget --load-cookies cookiejar.txt --referer=%s -p %s\n", $domain, $url;
printf "cmd: youtube-dl --cookies cookiejar.txt --referer %s %s\n", $domain, $url;

exit $?;
1; # $Source: /my/perl/scripts/cookiejar.pl $
You run it as follows:
perl cookiejar.pl https://example.com/ 'cookie1=value1; cookie2=value2'
Note:
You might need to install YAML::Syck with
cpan install YAML::Syck
or just comment out the Dump() call and the use YAML::Syck line.
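For reference, each data line the script writes follows the Netscape layout of seven tab-separated fields (domain, include-subdomains flag, path, secure flag, expiry timestamp, name, value). With the usage example above, the generated cookiejar.txt would look roughly like this (the expiry value here is just a placeholder for whatever $^T + 86400 evaluates to):

# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
.example.com	TRUE	/	FALSE	1700000000	cookie1	value1
.example.com	TRUE	/	FALSE	1700000000	cookie2	value2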
I am monitoring a directory for the creation of new files (.log files). These files are generated by a tool, and the tool writes log entries some time after creating the file; during that time the file is empty.
How can I wait until something is written to the log? Based on the log entries I will invoke different scripts.
use strict;
use warnings;
use File::Monitor;
use File::Basename;

my $script1 = "~/Desktop/parser1.pl";
my $script2 = "~/Desktop/parser2.pl";
my $dir     = "~/Desktop/tool/logs";

sub textfile_notifier {
    my ($watch_name, $event, $change) = @_;

    # The change object has a method called files_created,
    # which returns the names of any new files.
    my @new_file_paths = $change->files_created;

    for my $path (@new_file_paths) {
        # $ext is "" if the '.log' extension is not found, otherwise it's '.log'.
        my ($base, $fname, $ext) = fileparse($path, '.log');
        if ($ext eq '.log') {
            print "$path was created\n";
            if (-z $path) {
                # i need to wait until something is written to the log
            } else {
                my @arr = `head -30 $path`;
                foreach (@arr) {
                    if (/Tool1/) {
                        system("/usr/bin/perl $script1 $path &");
                    } elsif (/Tool2/) {
                        system("/usr/bin/perl $script2 $path &");
                    }
                }
            }
        }
    }
}

my $monitor = File::Monitor->new();
$monitor->watch( {
    name     => $dir,
    recurse  => 1,
    callback => { files_created => \&textfile_notifier },   # event => handler
} );

$monitor->scan;
while (1) {
    $monitor->scan;
}
Basically I am grepping some important information from the logs.
For this formulation of your question, something like this might help you:
use File::Tail;

# for log file $logname
my @logdata;
my $file = File::Tail->new(name => $logname, maxinterval => 1);
while (defined(my $newline = $file->read)) {
    push @logdata, $newline;
    # the decision to launch the script according to data in @logdata
}
Read more here
You are monitoring just the creation of the log file. Maybe you could use a sleep function inside the callback sub to wait until the log file has been written to. You could monitor file changes too, because some log files keep being extended.
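As a rough sketch of that sleep-and-poll idea (the wait_for_content helper name and the 60-second timeout are my own assumptions, not part of File::Monitor):

use strict;
use warnings;

# Poll until the file is non-empty, or give up after $timeout seconds.
sub wait_for_content {
    my ($path, $timeout) = @_;
    my $waited = 0;
    while (-z $path) {              # -z is true while the file has zero size
        return 0 if $waited >= $timeout;
        sleep 1;
        $waited++;
    }
    return 1;                       # the file now has content
}

# Inside the callback, instead of the empty -z branch:
# if (wait_for_content($path, 60)) {
#     my @arr = `head -30 $path`;
#     ... dispatch to parser1/parser2 as before ...
# }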
I want to use AWS S3 to store image files for my website. I created a bucket named images.mydomain.com, which is referenced by a DNS CNAME images.mydomain.com in AWS Route 53.
I want to check whether a folder or file exists; if not, I will create one.
The following PHP code works fine for a regular bucket name using the stream wrapper, but fails for a bucket name of the form xxxx.mydomain.com. This kind of bucket name also fails in the doesObjectExist() method.
// $new_dir = "s3://aaaa/akak3/kk1/yy3/ww4" ;             // this line works!
$new_dir = "s3://images.mydomain.com/us000000/10000" ;    // this line fails!

if ( !file_exists( $new_dir ) ) {
    if ( !mkdir( $new_dir, 0777, true ) ) {
        echo "create new dir $new_dir failed ! <br>";
    } else {
        echo "SUCCEED in creating new dir $new_dir <br>";
    }
} else {
    echo "dir $new_dir already exists. Skip creating dir ! <br>";
}
I got the following warning:
Warning: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint: "images.mydomain.com.s3.amazonaws.com". in C:\AppServ\www\ecity\vendor\aws\aws-sdk-php\src\Aws\S3\StreamWrapper.php on line 737
What is the problem here?
Any advise on what to do for this case?
Thanks!
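That warning usually means the stream wrapper's client is talking to the default (us-east-1/global) endpoint while the bucket images.mydomain.com lives in a different region, so S3 redirects you to the bucket-specific endpoint. Here is a minimal sketch of the likely fix, assuming the AWS SDK for PHP v2 that your stack trace points to; the region eu-west-1 and the credential placeholders are assumptions, so substitute the bucket's real region and your own keys:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Create the client pinned to the bucket's region (eu-west-1 is an assumption).
$client = S3Client::factory(array(
    'key'    => 'YOUR_AWS_KEY',
    'secret' => 'YOUR_AWS_SECRET',
    'region' => 'eu-west-1',
));

// Register the s3:// stream wrapper against this client so that
// file_exists()/mkdir() hit the bucket's own regional endpoint.
$client->registerStreamWrapper();

$new_dir = "s3://images.mydomain.com/us000000/10000";
if ( !file_exists( $new_dir ) ) {
    mkdir( $new_dir, 0777, true );
}
?>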