How to push images with Appium on AWS Device Farm devices

I've written code that works for pushing images with Appium locally on both Android and iOS devices.
The images are in the Appium project's /src/main/resources/images folder:
String basePath = System.getProperty("user.dir");
String imagePath = "/src/main/resources/images/testImage.jpeg";
File imageToPush = new File(basePath + imagePath);
The problem is that when this code runs on AWS Device Farm I cannot find the images (and don't know how or where to find them).
I've tried multiple ways to construct the basePath, but so far with no success.

According to the Appium push file documentation, you can provide the directory on the device where the files should be placed.
You can try to push the file using the Android SD card directory or the iOS application data folder directory as the parameter. You can then access the files on the Android SD card or in the iOS application data folder.
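For example, here is a minimal sketch using the Appium Java client's pushFile method (the device-side paths and the bundle identifier below are assumptions you would adjust for your app):
import java.io.File;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;

// assuming "driver" is the AndroidDriver / IOSDriver you already created
File imageToPush = new File(System.getProperty("user.dir")
        + "/src/main/resources/images/testImage.jpeg");

// Android: push to a location under external storage (the "SD card")
((AndroidDriver) driver).pushFile("/sdcard/Download/testImage.jpeg", imageToPush);

// iOS: push into the app's data container, addressed by bundle id
// ("com.example.myapp" is a placeholder)
((IOSDriver) driver).pushFile("@com.example.myapp:documents/testImage.jpeg", imageToPush);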
Alternatively, you can use the Add Extra Data feature to add the images as a zip file when you schedule a run on the console, or provide an extraDataPackageArn in the schedule-run CLI command. The extraDataPackageArn is generated by the create-upload CLI command, so you can run create-upload on the zip file of images you want to push to the device in order to obtain it.
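If you prefer to drive this from code rather than the console or CLI, the same flow is available through the SDK; here is a rough sketch with the AWS SDK for Java v2 (all ARNs, names and the test type are placeholders, and the upload must finish processing before the run is scheduled):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.devicefarm.DeviceFarmClient;
import software.amazon.awssdk.services.devicefarm.model.*;

void scheduleRunWithExtraData() throws Exception {
    // Device Farm's API lives in us-west-2
    DeviceFarmClient deviceFarm = DeviceFarmClient.builder().region(Region.US_WEST_2).build();

    // 1. Register the zip of images as an EXTERNAL_DATA upload.
    CreateUploadResponse created = deviceFarm.createUpload(CreateUploadRequest.builder()
            .projectArn("arn:aws:devicefarm:us-west-2:ACCOUNT:project:PLACEHOLDER")
            .name("images.zip")
            .type(UploadType.EXTERNAL_DATA)
            .build());

    // 2. PUT the zip file to the pre-signed URL returned by create-upload.
    HttpClient.newHttpClient().send(
            HttpRequest.newBuilder(URI.create(created.upload().url()))
                    .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("images.zip")))
                    .build(),
            HttpResponse.BodyHandlers.discarding());

    // 3. Reference the upload as the extra data package when scheduling the run
    //    (in practice, poll getUpload until its status is SUCCEEDED first).
    deviceFarm.scheduleRun(ScheduleRunRequest.builder()
            .projectArn("arn:aws:devicefarm:us-west-2:ACCOUNT:project:PLACEHOLDER")
            .appArn("arn:aws:devicefarm:us-west-2:ACCOUNT:upload:APP_PLACEHOLDER")
            .devicePoolArn("arn:aws:devicefarm:us-west-2:ACCOUNT:devicepool:PLACEHOLDER")
            .test(ScheduleRunTest.builder()
                    .type(TestType.APPIUM_JAVA_TESTNG) // placeholder test type
                    .testPackageArn("arn:aws:devicefarm:us-west-2:ACCOUNT:upload:TESTS_PLACEHOLDER")
                    .build())
            .configuration(ScheduleRunConfiguration.builder()
                    .extraDataPackageArn(created.upload().arn())
                    .build())
            .build());
}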
You can then access the extra data on the Android SD card or in the iOS application data folder. According to https://aws.amazon.com/device-farm/faqs/:
Q: I’d like to supply media or other data for my app to consume. How do I do that?
You can provide a .zip archive up to 4 GB in size. On Android, it will be extracted to the root of external memory; on iOS, to your app’s sandbox.
For Android expansion files (OBB), we will automatically place the file into the location appropriate to the OS version.

Related

Azure Data Factory HDFS dataset preview error

I'm trying to connect to HDFS from ADF. I created a folder and a sample file (ORC format) and put it in the newly created folder.
Then in ADF I successfully created a linked service for HDFS using my Windows credentials (the same user that was used for creating the sample file):
But when trying to browse the data through the dataset:
I'm getting an error: The response content from the data store is not expected, and cannot be parsed.
Is there something I'm doing wrong, or is it some kind of permissions issue?
Please advise.
This appears to be a generic issue: you need to point to a file with the appropriate extension rather than to the folder itself. Also make sure you are using a supported data store and activity.
You can follow the official MS doc on using an HDFS server with Azure Data Factory.

Springboot server in Elastic Beanstalk creates files that I can't see

I have a Springboot server that is deployed to an Elastic Beanstalk environment in AWS. The basic functionality is this:
1. Upload a file to the server
2. The server processes the file by doing some data manipulation.
3. Then the file that is created is sent to a user via email.
The strange thing is that the functionality mentioned above is working. The output file is sent to my email inbox successfully. However, the file cannot be seen when SSHed into the instance. The entire directory that gets created for the data manipulation is simply not there. I have looked everywhere.
To test this, I even created a simple function in my Springboot Controller like this:
@GetMapping("/")
public ResponseEntity<String> dummyMethod() {
// TODO : remove line below after testing
new File(directoryToCreate).mkdirs();
return new ResponseEntity<>("Successful health check. Status: 200 - OK", HttpStatus.OK);
}
If I use Postman to hit this endpoint, the directory CANNOT be seen in the terminal session I am SSHed into. The program is working, so I know the code is correct in that sense, but it's like the files and directories are invisible to me.
Furthermore, if I were to run the server locally (using Windows OR Linux) and hit this endpoint, the directory is successfully created.
Update:
I found where the app lives in the environment, at /var/app. But my folders and files are still not there; only the source code files etc. are there. The files that my server is supposed to be creating are still missing. I can even print out the absolute path to the file after creating it, but that file still doesn't exist. Here is an example:
Files.copy(source, dest);
logger.info("Successfully copied file to: {}", dest.getAbsolutePath());
will print...
Successfully copied file to: /tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58/results_map_GVA.csv
That path DOES NOT exist on my server, yet the server code CAN still email the file to me after it is processed. But if I SSH into the instance and go to that path, nothing is there.
If I use the command: find . -name "GVA*" (to search for the file I am looking for) then it prints this:
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
But this looks like it is keeping track of differences between versions of files, since I see diff and merged in the file paths. I just want to find where that file actually resides.
If you need to store an uploaded file somewhere from a Spring Boot app, look at using an Amazon S3 bucket as opposed to writing the file to a folder on the server. For example, assume you are working with a photo app and the photos can be uploaded via the Spring Boot app. Instead of placing them in a directory on the server, use the Amazon S3 Java API to store the file in an Amazon S3 bucket.
Here is an example of a Spring Boot app that handles uploaded files by placing them in a bucket:
Creating a dynamic web application that analyzes photos using the AWS SDK for Java
This example app also shows you how to use the SES API to send data (a report in this example) to a user via email.
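As a minimal sketch of the storage part with the AWS SDK for Java v2 (the bucket name, key prefix and file path below are placeholders):
import java.nio.file.Path;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Upload the generated result file to S3 instead of relying on the
// container's local file system, which disappears with the container.
try (S3Client s3 = S3Client.create()) {
    Path result = Path.of("/tmp/TESTING/Test-Results/results_map_GVA.csv"); // example path
    s3.putObject(PutObjectRequest.builder()
                    .bucket("my-results-bucket")              // placeholder bucket
                    .key("gva-output/" + result.getFileName())
                    .build(),
            RequestBody.fromFile(result));
}
The file then lives in durable storage regardless of which container or instance handled the request, and the email step can link to it or attach it from there.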

Is there a way to zip a folder of files into one zip, gzip, bzip, etc file using Google Cloud?

My Goal: I have hundreds of Google Cloud Storage folders with hundreds of images in them. I need to be able to zip them up and email a user a link to a single zip file.
I made an attempt to zip these files on an external server using PHP's zip function, but that has proved to be fruitless given the ultimate size of the zip files I'm creating.
I have since found that Google Cloud offers a Bulk Compress Cloud Storage Files utility (docs are at https://cloud.google.com/dataflow/docs/guides/templates/provided-utilities#api). I was able to call this utility successfully, but it zips each file into its own bzip2 or gzip file.
For instance, if I had the following files in the folder I'm attempting to zip:
apple.jpg
banana.jpg
carrot.jpg
The resulting outputDirectory would have:
apple.bzip2
banana.bzip2
carrot.bzip2
Ultimately, I'm hoping to create a single file named fruits.bzip2 that can be unzipped to reveal these three files.
Here's an example of the request I'm making to https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/templates:launch?gcsPath=gs://dataflow-templates/latest/Bulk_Compress_GCS_Files
{
  "jobName": "ziptest15",
  "environment": {
    "zone": "us-central1-a"
  },
  "parameters": {
    "inputFilePattern": "gs://PROJECT_ID.appspot.com/testing/samplefolder1a/*.jpg",
    "outputDirectory": "gs://PROJECT_ID.appspot.com/testing/zippedfiles/",
    "outputFailureFile": "gs://PROJECT_ID.appspot.com/testing/zippedfiles/failure.csv",
    "compression": "BZIP2"
  }
}
The best way to achieve that is to create an app that does the following (a sketch is shown after this list):
Downloads locally all the files under a GCS prefix (what you call a "directory", but directories don't exist on GCS, only objects sharing the same prefix)
Creates an archive (it can be a ZIP or a TAR; ZIP won't really compress the images, since image formats are already compressed. What you mainly want is a single archive with all the images in it)
Uploads the archive to GCS
Cleans up the local files
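Here is a rough sketch of such an app in Java with the google-cloud-storage client (the bucket, prefix and output object names are placeholders; unlike the steps above, this version streams each object straight into the zip and writes the archive directly back to GCS, so there is nothing to clean up locally):
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class ZipGcsPrefix {
    public static void main(String[] args) throws Exception {
        String bucket = "PROJECT_ID.appspot.com";          // placeholder bucket
        String prefix = "testing/samplefolder1a/";         // "folder" to archive
        String output = "testing/zippedfiles/fruits.zip";  // single archive object

        Storage storage = StorageOptions.getDefaultInstance().getService();
        BlobInfo zipInfo = BlobInfo.newBuilder(BlobId.of(bucket, output))
                .setContentType("application/zip").build();

        // List every object under the prefix and copy its bytes into one
        // ZipOutputStream that writes straight back to a GCS object.
        try (OutputStream out = Channels.newOutputStream(storage.writer(zipInfo));
             ZipOutputStream zip = new ZipOutputStream(out)) {
            for (Blob blob : storage.list(bucket, Storage.BlobListOption.prefix(prefix)).iterateAll()) {
                if (blob.getName().endsWith("/")) continue; // skip "folder" placeholder objects
                zip.putNextEntry(new ZipEntry(blob.getName().substring(prefix.length())));
                zip.write(blob.getContent());               // per-object bytes held in memory
                zip.closeEntry();
            }
        }
    }
}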
Now you have to choose where to run this app.
On Cloud Run, you are limited by the space that you have in memory (for now; new features are coming). Currently you are limited to 8GB of memory (and soon 16GB), so your app will be able to process a total image size of roughly 45% of the memory capacity (45% for the image size, 45% for the archive size, 10% for the app's memory footprint). Set the concurrency parameter to 1.
If you need more space, you can use Compute Engine.
Set up a startup script that runs your app and automatically stops the VM at the end. The script reads the parameters from the metadata server and runs your app with the correct parameters.
Before each run, update the Compute Engine metadata with the directory to process (and maybe other app parameters).
-> The issue is that you can only run one process at a time. Otherwise you need to create a VM for each job, and then delete the VM at the end of the startup script instead of stopping it.
A side solution is to use Cloud Build. Run a build with the parameters in the substitution variables and perform the job in Cloud Build. You are limited to 10 builds in parallel. Use the diskSizeGb build option to set the correct disk size according to your file size requirements.
The Dataflow template only compresses each file individually and doesn't create an archive.

Sitecore - How to load images on CD server from CM server?

I have two separate servers: one is the CD server and one is the CM server. I upload images on the CM server and publish them. In the web database I can see the images under the Media Library item.
But they aren't displayed on the CD server (e.g. on the website); it indicates that the images are not found. Please help me understand how I can solve this problem, or whether I need some configuration for that.
Many thanks.
Sitecore media items can carry the actual media file either as:
Blob in the database - everything works automatically OOB
Files on the file system - one needs to configure either WebDeploy, or DFS
Database resources are costly, so you might not want to waste them on something that can be achieved with free tools.
Since WebDeploy by default locates modified files by comparing file hashes between source and target, it will become slower after a while.
You might have uploaded the image to the media library as a file. In that case the image is stored as a file on the file system. To verify this, check whether the image item in the media library has a value set in its 'File Path' field. Such files have to be moved to the file system of the CD server as well.
If you upload your images in bulk, you can store them as blobs in the database by default, rather than as files on the file system, using the following setting:
<setting name="Media.UploadAsFiles" value="false"/>

How to export registered server settings in Aqua Data Studio?

Does anyone know how to export registered servers in Aqua Data Studio? Maybe there's some tricky method to do it by copying some .ini file or registry keys?
AD Studio server registrations are in the [USER_HOME]/.datastudio/connections directory. You can copy your existing connections from one machine to another.
AquaFold's documentation about copying registrations from one computer to another is here:
https://www.aquaclusters.com/app/home/project/public/aquadatastudio/wikibook/Documentation16/page/128/Configuration-Connection-files#copy
*** Note: make sure you take a backup of the .datastudio folder before replacing files.
To export the connections, please make sure you have a fresh installation of Aqua Data Studio on the other system and that you haven't set up any new connections.
1) Simply go to C:\Users\[userName]\.datastudio
2) Copy the folders and files below and place them in the same location on the new system:
C:\Users\[userName]\.datastudio\connections
C:\Users\[userName]\.datastudio\bigquery
C:\Users\[userName]\.datastudio\pfile.properties
pfile.properties has the cipher key to decrypt passwords on the system.
I initially copied only the connections folder and found that none of my passwords worked anymore. Then I added the pfile.properties file. That fixed the password problem, but when I tried to open a Query Analyzer window on any of my many registered MS SQL Server servers, I got an error: Id 18456, Level 14, State 1, Line 1: Login failed for user '<username>'.
By copying the rest of the files and subfolders in the .datastudio folder (except for the history, which I didn't need, and the license files, since I had to renew my license anyway), the error was cleared. Bottom line: copy the entire .datastudio folder to transfer your configuration to the new machine, as documented in the aquaclusters wiki link: "Copying this directory to a new computer copies all of your current ADS customizations and server registrations."