I'm having a problem with AWS Device Farm, and Amazon is not very specific about what goes wrong.
After I create a new run and try to upload my APK file, it shows this message before the upload finishes:
There was a problem uploading your file. Please try again.
There are no error codes. I have already tried several times with an app signed for debug and for release, but neither upload finishes. Is this a temporary problem in the Amazon cloud, or is it a known error?
I work for the AWS Device Farm team.
Sorry to hear that you are running into issues.
1. If it is the app that is giving you an error, check whether you are able to run the app locally on a real device (see the adb sketch after this list). If yes, then it should work on Device Farm. At times, app builds made for emulators/simulators are uploaded, and these can cause the error.
2. If it is the test APK that you are uploading, confirm the same thing as in point 1.
3. If both of the points above check out and you are still getting an error, please start a thread on the AWS Device Farm forums so we can take a closer look at your runs, or share your run URL here and we will take a look.
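A minimal way to do that local check, assuming you have adb installed and a device attached over USB (the APK filename here is just a placeholder):
adb devices
adb install -r app-debug.apk
If the install fails on a real device, the upload error is most likely coming from the app package itself.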
Would it be possible to try uploading this file using the CLI [1]? The create-upload command does the same thing the web console is doing, and it can return more information than the web console.
aws devicefarm create-upload --project-arn <yourProjectsArn> --name <nameOfFile> --type <typeOfAppItIs> --region us-west-2
This will return an upload ARN, which you will need later, so keep it handy. If you need more verbosity on any of the CLI commands listed here, you can use the --debug option.
The create-upload command will also return a presigned URL, which you can upload the file to with an HTTP PUT, for example with curl:
curl -T someAppFileWithSameNameAsSpecifiedBefore "presigned-url"
Once the file has been uploaded, you can run the get-upload command to see the status of the upload; if there are any problems, this will show why.
aws devicefarm get-upload --arn <uploadArnReturnToYouFromPreviousCommand> --region us-west-2
My output looks like this:
{
    "upload": {
        "status": "SUCCEEDED",
        "name": "app-debug.apk",
        "created": 1500080938.105,
        "type": "ANDROID_APP",
        "arn": "arn:aws:devicefarm:us-west-2:<accountNum>:upload:<uploadArn>",
        "metadata": "{\"device_admin\":false,\"activity_name\":\"com.xamarin.simplecreditcardvalidator.MainActivity\",\"version_name\":\"1.1\",\"screens\":[\"small\",\"normal\",\"large\",\"xlarge\"],\"error_type\":null,\"sdk_version\":\"21\",\"package_name\":\"com.xamarin.simplecreditcardvalidator\",\"version_code\":\"2\",\"native_code\":[],\"target_sdk_version\":\"25\"}"
    }
}
Please let me know what this returns and I look forward to your response.
Best Regards
James
[1] http://docs.aws.amazon.com/cli/latest/reference/devicefarm/create-upload.html
I also used this article to learn how to do most of this:
https://aws.amazon.com/blogs/mobile/get-started-with-the-aws-device-farm-cli-and-calabash-part-1-creating-a-device-farm-run-for-android-calabash-test-scripts/
I would appreciate any help with this:
I've followed the guide for AWS Copilot here: https://aws.github.io/copilot-cli/docs/getting-started/first-app-tutorial/ and then the guide for creating a pipeline and connecting it to GitHub here: https://aws.github.io/copilot-cli/docs/concepts/pipelines/. That all appears to have worked, and I can view the React app I'm working on at the URL indicated in AWS.
My problem is that when I make changes to my code and push them to the tracked GitHub branch, the changes don't appear when viewing the app at that URL. However, when I push to GitHub, the pipeline does register that a change has occurred: it indicates that a change has been made and goes through the flow of creating a new build. But whatever I try, the changes don't seem to actually show up.
I assume that I'm missing something simple here, and that for some reason Docker is building the app from the original code. But I can't figure out why that would be. Maybe something is weird with my Dockerfile?
My Dockerfile looks like this:
FROM node:16.14
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm i
COPY . ./
CMD ["npm", "run", "server"]
My understanding of how this should work is that I push new code to GitHub, that code is sent to the AWS pipeline, a new image is built from it, and that image is then used to create a container that is hosted on ECS. But clearly I am missing something.
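As a local sanity check (just a sketch; the image tag and port below are placeholders, and it assumes the dev server listens on port 3000), you can build the image yourself and confirm the new code actually ends up inside it:
docker build -t react-app-test .
docker run --rm -p 3000:3000 react-app-test
If the locally built image does show your changes, the problem is more likely in the pipeline or the ECS deployment than in the Dockerfile.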
copilot deploy does work. I'm unsure if:
1. the problem is that my pipeline is successfully building (as it does not throw an error in the console) and then just not hosting it at the same URL as copilot deploy, or
2. the pipeline is hitting an error that just doesn't show up in the pipeline console.
Digging into the logs I find this:
echo "Cloudformation stack and config files were not generated. Please check build logs to see if there was a manifest validation error." 1>&2;
Which seems to point towards the second option. Any suggestions on how to resolve whatever is going on in the container, if that is the problem?
The error suggests that I check the build logs, but these are the build logs. Are there more granular build logs I can examine?
When running containers in ECS, unless your container is already crashing because of an error, the service often won't pick up code changes from your new image unless you force a new deployment. You can do this from the command line using the AWS CLI:
aws ecs update-service --cluster <cluster_name> --service <service_name> --force-new-deployment --profile <aws_profile_name>
Note that --profile is optional if you're using your default AWS CLI configuration profile.
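To confirm the new deployment actually rolled out (again just a sketch; the cluster and service names are placeholders), you can inspect the service's deployments and watch the new one replace the old:
aws ecs describe-services --cluster <cluster_name> --services <service_name> --query 'services[0].deployments'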
I have to update a website on AWS using serverless deploy.
This website was not created by me; it's the first time I've worked with Serverless and AWS solutions.
I have the source code, deploy files, etc. from the person previously in charge.
I run a before-deploy.js script to create all the local files and check them to see whether the updates went OK. Everything's fine.
But any time I try to deploy using the simple command "serverless deploy", it fails with this error:
CREATE_FAILED: MainStaticSite (AWS::S3::Bucket)
“mywebsite.com” already exists
I don't really understand this error; I know the website already exists, but I just want to update it.
I tried more specific commands like:
serverless deploy -v --stage production --region eu-west-1
But this one only shows this output:
Framework Core: 3.10.1
Plugin: 6.2.0
SDK: 4.3.2
PS
And it doesn't update the website.
I changed the access keys on AWS; maybe it's because of this?
It looks like it doesn't want to overwrite the existing files, but I have no idea why.
If someone has an answer or a lead, I'd appreciate it.
Thank you :)
I'm new to the AWS CLI (and to programming), but I've looked through the documentation and posted questions and can't find this addressed, so I must be missing something basic.
How do I save the output? I'd like to run aws s3 sync to back up my data overnight, and I'd like to see a log report in the morning of what happened.
At this point, I can run the AWS CLI from a command prompt:
aws s3 sync "my local directory" s3://mybucket
I've set the output format to text in the config, but I'm only seeing the text in the command prompt. How can I export it to a log file?
Is this not possible? What am I missing?
Many thanks in advance,
Matthew
aws s3 sync "my local directory" s3://mybucket --debug 2> "local path\logname.txt"
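If you don't need the full --debug trace, a simpler form for a nightly scheduled task is to redirect both standard output and standard error to a log file (the bucket name and UNC log path below are placeholders):
aws s3 sync "\\fileserver\data" s3://mybucket > "\\fileserver\logs\s3sync.log" 2>&1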
Not only did I figure out adding > filename to the end of the command, but I also figured out that when saving this as a batch file, it won't run as a scheduled task in Windows Server 2008 R2 or Windows 7 if it contains drive mappings; UNC paths are required.
Thanks!
Matthew
This worked perfectly for me:
aws cloudformation describe-stack-events --stack-name "stack name" --debug 2> "C:\Users\ravi\Desktop\CICDWORKFolder\RedshiftFolder\logname.txt"
I have a cluster up and running. I am trying to add a step to run my code. The code itself works fine on a single instance. The only thing is I can't get it to run from S3.
aws emr add-steps --cluster-id j-XXXXX --steps Type=spark,Name=SomeSparkApp,Args=[--deploy-mode,cluster,--executor-memory,0.5g,s3://<mybucketname>/mypythonfile.py]
This is exactly what examples show I should do. What am I doing wrong?
Error I get:
Exception in thread "main" java.lang.IllegalArgumentException: Unknown/unsupported param List(--executor-memory, 0.5g, --executor-cores, 2, --primary-py-file, s3://<mybucketname>/mypythonfile.py, --class, org.apache.spark.deploy.PythonRunner)
Usage: org.apache.spark.deploy.yarn.Client [options]
Options:
--jar JAR_PATH Path to your application's JAR file (required in yarn-cluster
mode)
.
.
.
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Command exiting with ret '1'
When I specify this instead:
aws emr add-steps --cluster-id j-XXXXX --steps Type=spark,Name= SomeSparkApp,Args=[--executor-memory,0.5g,s3://<mybucketname>/mypythonfile.py]
I get this error instead:
Error: Only local python files are supported: Parsed arguments:
master yarn-client
deployMode client
executorMemory 0.5g
executorCores 2
EDIT: It gets further along when I manually create the Python file after SSHing into the cluster and specify it as follows:
aws emr add-steps --cluster-id 'j-XXXXX' --steps Type=spark,Name= SomeSparkApp,Args=[--executor-memory,1g,/home/hadoop/mypythonfile.py]
But that is not really doing the job.
Any help appreciated. This is really frustrating, as a well-documented method on AWS's own blog here https://blogs.aws.amazon.com/bigdata/post/Tx578UTQUV7LRP/Submitting-User-Applications-with-spark-submit does not work.
I will ask, just in case: did you use the correct buckets and cluster IDs?
Anyway, I had similar problems; for example, I could not use --deploy-mode,cluster when reading from S3.
When I used --deploy-mode,client,--master,local[4] in the arguments, I think it worked. But I still needed something different (I can't remember exactly what), so I resorted to a solution like this:
Firstly, I use a bootstrap action where a shell script runs the command:
aws s3 cp s3://<mybucket>/wordcount.py wordcount.py
and then I add a step at cluster creation through the SDK in my Go application, but I can reconstruct that and give you the equivalent CLI command, which looks like this:
aws emr add-steps --cluster-id j-XXXXX --steps Type=CUSTOM_JAR,Name="Spark Program",Jar="command-runner.jar",ActionOnFailure=CONTINUE,Args=["spark-submit",--master,local[4],/home/hadoop/wordcount.py,s3://<mybucket>/<inputfile.txt> s3://<mybucket>/<outputFolder>/]
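For completeness, here is a sketch of how that bootstrap action could be registered when the cluster is created (the script location, release label, and instance settings are placeholders, not taken from the post above; the script itself contains little more than the aws s3 cp line shown earlier):
aws emr create-cluster --name "SparkCluster" --release-label emr-4.7.0 --applications Name=Spark --use-default-roles --instance-type m3.xlarge --instance-count 3 --bootstrap-actions Path=s3://<mybucket>/copy-wordcount.sh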
I searched for days and finally discovered this thread, which states:
PySpark currently only supports local files. This does not mean it only runs in local mode, however; you can still run PySpark on any cluster manager (though only in client mode). All this means is that your python files must be on your local file system. Until this is supported, the straightforward workaround then is to just copy the files to your local machine.
I've run a job on AWS's EMR and stored the output in the EMR job's HDFS. I am now trying to copy the result to S3 via distcp or s3distcp, but both are failing as described below. (Note: the reason I'm not just sending my EMR job's output directly to S3 is the (currently unresolved) problem I describe in Where is my AWS EMR reducer output for my completed job (should be on S3, but nothing there)?)
For distcp, I run (following this post's recommendation):
elastic-mapreduce --jobflow <MY-JOB-ID> --jar \
s3://elasticmapreduce/samples/distcp/distcp.jar \
--args -overwrite \
--args hdfs:///output/myJobOutput,s3n://output/myJobOutput \
--step-name "Distcp output to s3"
In error log (/mnt/var/log/hadoop/steps/8), I get:
With failures, global counters are inaccurate; consider running with -i
Copy failed: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: <SOME-REQUEST-ID>, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: <SOME-EXT-REQUEST-ID>
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:548)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:288)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
...
For s3distcp, I run (following the s3distcp documentation):
elastic-mapreduce --jobflow <MY-JOB-ID> --jar \
s3://us-east-1.elasticmapreduce/libs/s3distcp/1.0.4/s3distcp.jar \
--args '--src,/output/myJobOutput,--dest,s3n://output/myJobOutput'
In the error log (/mnt/var/log/hadoop/steps/9), I get:
java.lang.RuntimeException: Reducer task failed to copy 1 files: hdfs://10.116.203.7:9000/output/myJobOutput/part-00000 etc
at com.amazon.elasticmapreduce.s3distcp.CopyFilesReducer.close(Unknown Source)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:537)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:428)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Any ideas what I'm doing wrong?
Update: Someone responding on the AWS Forums to a post about a similar distcp error mentions IAM user permissions, but I don't know what this means (edit: I haven't created any IAM users, so it is using the defaults); hopefully it helps pinpoint my problem.
Update 2: I noticed this error in the namenode log file (when re-running s3distcp). I'm going to look into the default EMR permissions to see if that is my problem:
2012-06-24 21:57:21,326 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping (IPC Server handler 40 on 9000): got exception trying to get groups for user job_201206242009_0005
org.apache.hadoop.util.Shell$ExitCodeException: id: job_201206242009_0005: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:966)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5160)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5143)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:1992)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:837)
...
Update 3: I contacted AWS Support, and they didn't see a problem, so I am now waiting to hear back from their engineering team. I will post back as I hear more.
Try this solution. At least it worked for me. (I successfully copied a directory containing a 30 GB file.)
I'm not 100% positive, but after reviewing my commands above, I noticed that my destination on S3 does NOT specify a bucket name. This appears to simply be a case of rookie-ism.
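For reference, the corrected s3distcp destination would then look something like this (the bucket name is a placeholder):
elastic-mapreduce --jobflow <MY-JOB-ID> --jar \
 s3://us-east-1.elasticmapreduce/libs/s3distcp/1.0.4/s3distcp.jar \
 --args '--src,/output/myJobOutput,--dest,s3n://<my-bucket>/output/myJobOutput'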