I'm trying to use the erlcloud library for S3 uploads in my app. As a test, I'm trying to get it to list buckets via an iex console:
iex(4)> s3 = :erlcloud_s3.new("KEY_ID", "SECRET_KEY")
...
iex(5)> :erlcloud_s3.list_buckets(s3)
** (ErlangError) erlang error: {:aws_error, {:socket_error, :timeout}}
(erlcloud) src/erlcloud_s3.erl:909: :erlcloud_s3.s3_request/8
(erlcloud) src/erlcloud_s3.erl:893: :erlcloud_s3.s3_xml_request/8
(erlcloud) src/erlcloud_s3.erl:238: :erlcloud_s3.list_buckets/1
I've checked that inets, ssl, and erlcloud are all started, and I know the credentials work fine, because I've tested them in a similar fashion with a Ruby library in irb.
I've tried configuring it with a longer timeout, but no matter how high I set it I still get this error.
Any ideas? Or approaches I could take to debug this?
I could reproduce the same error, and was able to resolve it by replacing the double quotes with single quotes:
> iex(4)> s3 = :erlcloud_s3.new('KEY_ID', 'SECRET_KEY')
> iex(5)> :erlcloud_s3.list_buckets(s3)
Assuming double quotes were used, the error is likely caused by a type mismatch: a double-quoted literal in Elixir is a string (a binary), whereas the Erlang library expects a charlist, which is what single quotes produce.
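A quick way to see the difference in an iex session (a minimal sketch; the keys are placeholders, and String.to_charlist/1 assumes Elixir 1.3 or later):

iex> is_binary("KEY_ID")   # double quotes build an Elixir string, i.e. a binary
true
iex> is_list('KEY_ID')     # single quotes build an Erlang charlist
true
iex> s3 = :erlcloud_s3.new(String.to_charlist("KEY_ID"), String.to_charlist("SECRET_KEY"))
iex> :erlcloud_s3.list_buckets(s3)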
I'm doing what I think is a very simple thing to check that alpakka is working:
val awsCreds = AwsBasicCredentials.create("xxx", "xxx")
val credentialsProvider = StaticCredentialsProvider.create(awsCreds)
implicit val staticCreds = S3Attributes.settings(S3Ext(context.system).settings.withCredentialsProvider(credentialsProvider)
.withS3RegionProvider(new AwsRegionProvider {val getRegion: Region = Region.US_EAST_2}))
val d = S3.checkIfBucketExists(action.bucket)
d foreach { msg => log.info("msgs: " + msg.toString) }
When I run this I get
msgs: NotExists
But the bucket referred to by action.bucket does exist, and I can access it using these credentials. What's more, when I modify the credentials (by changing the secret key), I get the same message. What I should get, according to the documentation, is AccessDenied.
I got to this point because I didn't think the environment was picking up on the right credentials - hence all the hard-coded values. But now I don't really know what could be causing this behavior.
Thanks
Update: The action object is just a case class consisting of a bucket and a path. I've checked in debug that action.bucket and action.path point to the things they should be - in this case an S3 bucket. I've also tried the above code with just the string bucket name in place of action.bucket.
Just my carelessness...
An errant copy-paste added an extra implicit actor system to the mix. Implicit materializers changed in Akka 2.6, and I think that change, combined with the extra implicit ActorSystem, produced this odd behavior.
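For reference, a minimal sketch of how the call can be wired with a single implicit ActorSystem (assumptions are spelled out in the comments; the system name, bucket name, and keys are placeholders, not anything from the original code):

// Assumptions: Alpakka's S3.checkIfBucketExists overload that takes an implicit Materializer
// and Attributes and returns a Future[BucketAccess]; Akka 2.6, which derives the materializer
// from the implicit ActorSystem; AWS SDK v2 credential classes.
import akka.actor.ActorSystem
import akka.stream.Attributes
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.alpakka.s3.{S3Attributes, S3Ext}
import software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, StaticCredentialsProvider}
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.regions.providers.AwsRegionProvider

implicit val system: ActorSystem = ActorSystem("s3-check") // exactly one implicit ActorSystem in scope
import system.dispatcher                                   // ExecutionContext for the Future callback

val awsCreds = AwsBasicCredentials.create("xxx", "xxx")
implicit val staticCreds: Attributes = S3Attributes.settings(
  S3Ext(system).settings
    .withCredentialsProvider(StaticCredentialsProvider.create(awsCreds))
    .withS3RegionProvider(new AwsRegionProvider { val getRegion: Region = Region.US_EAST_2 })
)

S3.checkIfBucketExists("my-bucket").foreach(access => system.log.info("access: " + access))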
I'm running WSO2 API Manager 2.0.1-SNAPSHOT on Windows. When I modify a subscription tier and save it, it reports the exception below, and although the billing plan changes, the API still displays the FREE label.
[2016-08-12 15:30:02,504] ERROR - EventProcessorAdminService Error while deleting the execution plan file
org.wso2.carbon.event.processor.core.exception.ExecutionPlanConfigurationException: Error while deleting the execution plan file
at org.wso2.carbon.event.processor.core.internal.util.EventProcessorConfigurationFilesystemInvoker.delete(EventProcessorConfigurationFilesystemInvoker.java:124)
......
Caused by: java.nio.file.InvalidPathException: Illegal char <:> at index 2: /D:/emman/PROJECT/AA/apimgmt/wso2am-2.0.1-SNAPSHOT/repository/deployment/server/\executionplans
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at org.wso2.carbon.event.processor.core.internal.util.EventProcessorUtil.validateFilePath(EventProcessorUtil.java:387)
at org.wso2.carbon.event.processor.core.internal.util.EventProcessorConfigurationFilesystemInvoker.delete(EventProcessorConfigurationFilesystemInvoker.java:109)
... 65 more
[2016-08-12 15:30:02,539] ERROR - ThrottlePolicyDeploymentManager Error while deploying policy to global policy server.Error while deleting the execution plan file
[2016-08-12 15:30:02,541] INFO - subscription-policy-edit:jag SubscriptionPolicy [policyName=Gold, description=Allows 5000 requests per minute, defaultQuotaPolicy=QuotaPolicy [type=requestCount, limit=RequestCountLimit [requestCount=5000, toString()=Limit [timeUnit=min, unitTime=1]]]rateLimitCount=-1, tenantId=-1234,ratelimitTimeUnit=NA]
As per your logs, the error happens due to the invalid file path below.
/D:/emman/PROJECT/AA/apimgmt/wso2am-2.0.1-SNAPSHOT/repository/deployment/server/\executionplans
I had a look at the code. It reads the first part of this path from the <RepositoryLocation> tag of the carbon.xml file. By default, it should look like this:
<RepositoryLocation>${carbon.home}/repository/deployment/server</RepositoryLocation>
Please verify that you have the same in your carbon.xml. If you are getting this error with that config in place, change it to an absolute path like the one below and try again:
D:\emman\PROJECT\AA\apimgmt\wso2am-2.0.1-SNAPSHOT\repository\deployment\server
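For example, the full tag would then look something like this (the path shown is the one from your logs; adjust it to your actual carbon home):

<RepositoryLocation>D:\emman\PROJECT\AA\apimgmt\wso2am-2.0.1-SNAPSHOT\repository\deployment\server</RepositoryLocation>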
To make the path more Linux-like, I used this trick: share your carbon home folder, then change the RepositoryLocation setting in carbon.xml to point at the share, e.g. //machinename/share.
I'm using Travis CI for a gem I'm developing and ran into a strange error (links are at the end of the question).
The gem stores some information serialized as YAML; it isn't built manually but generated with YAML.dump and later loaded again with YAML.load.
The following lines are used to dump and load a hash to/from YAML:
headers[:ar_mailer_settings] = YAML.dump(settings)
...
ar_settings = YAML.load(mail['ar_mailer_settings'].value)
The latter line seems to produce an error on Travis CI, but when I run the tests locally with the same binary versions, everything runs perfectly fine:
Psych::SyntaxError: (<unknown>): mapping values are not allowed in this context at line 1 column 22
/home/travis/.rvm/rubies/ruby-2.0.0-p576/lib/ruby/2.0.0/psych.rb:205:in `parse'
...
/home/travis/build/Stex/ar_mailer_revised/lib/action_mailer/ar_mailer.rb:84:in `deliver!'
I put a simple puts into the deliver! method to see whether there was a difference in the stored values, and it seems that on Travis CI the newlines in the generated YAML are being lost, which then causes the parse error:
Travis:
"--- smtp_settings: :address: localhost :port: 25 :domain: localhost.localdomain :user_name: some.user :password: some.password :authentication: :plain :enable_starttls_auto: true "
Locally:
"---\nsmtp_settings:\n :address: localhost\n :port: 25\n :domain: localhost.localdomain\n :user_name: some.user\n :password: some.password\n :authentication: :plain\n :enable_starttls_auto: true\n"
Interestingly, I didn't change anything regarding these methods before Travis CI started failing, so I'm not sure whether I'm simply overlooking something here or whether it's some kind of incompatibility issue.
Can I do something to preserve the newline characters?
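One possible workaround (just a sketch, under the assumption that the newlines are being stripped while the value passes through the mail header, where header folding can mangle them) would be to make the serialized value newline-free, e.g. by Base64-encoding the YAML before putting it into the header:

require 'base64'
require 'yaml'

# encode before storing in the header, so the value contains no raw newlines
headers[:ar_mailer_settings] = Base64.strict_encode64(YAML.dump(settings))

# ...and decode again before parsing
ar_settings = YAML.load(Base64.strict_decode64(mail['ar_mailer_settings'].value))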
Edit: Additional Information
The gem allows setting custom SMTP settings and attributes for single email records.
These can be set directly when generating the email in an ActionMailer::Base instance (see here for a dummy mailer).
To transport these custom settings to ActionMailer's deliver! method, which actually creates the new email record, I serialize them via YAML, save them temporarily in the email header, and restore them later (see ar_mailer_setting and deliver! here).
The source code which raises the error: here
The complete Travis CI output: here
If more information is needed, please let me know and I'll add it to the question.
Thanks in advance!
I am trying to fetch Riak objects using simple filters.
I have enabled search on the bucket before storing objects in it, and I try the following:
MapReduceResult result = riakClient.
mapReduce("serviceProvider", "name:oved1").
addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true).execute();
I get this exception:
com.basho.riak.client.RiakException: java.io.IOException: {"error":"map_reduce_error"}
at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:80)
at com.att.cso.omss.datastore.riak.controllers.RiakBaseController.getAllServiceProvider(RiakBaseController.java:339)
at com.att.cso.omss.datastore.riak.App.serviceProviderTests(App.java:64)
at com.att.cso.omss.datastore.riak.App.main(App.java:38)
Caused by: java.io.IOException: {"error":"map_reduce_error"}
at com.basho.riak.client.raw.http.ConversionUtil.convert(ConversionUtil.java:588)
at com.basho.riak.client.raw.http.HTTPClientAdapter.mapReduce(HTTPClientAdapter.java:386)
at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:78)
... 3 more
Any idea what I am missing?
I was able to fix this issue...
Apparently you need to do two things prior to storing objects that need to be searchable later:
Enable search in app.config (/etc/riak):
{riak_search, [{enabled, true}]}
Enable search on the bucket:
Bucket bucket = riakClient.createBucket(bucketName).enableForSearch().execute();
After doing that, this returns values:
MapReduceResult result = riakClient.
mapReduce(bucketName, "name:9").
addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true).
execute();
I'm new to whirr and AWS so apologies in advance if I'm asking something silly.
I'm following the directions here to set up Whirr, and
bin/whirr launch-cluster --config hadoop.properties
fails with the following:
[~/src/cloudera/whirr-0.1.0+23]$ bin/whirr version rvm:ruby-1.8.7-p299
Apache Whirr 0.1.0+23
[~/src/cloudera/whirr-0.1.0+23]$ bin/whirr launch-cluster --config hadoop.properties rvm:ruby-1.8.7-p299
Launching myhadoopcluster cluster
Exception in thread "main" com.google.inject.CreationException: Guice creation errors:
1) No implementation for java.lang.String annotated with @com.google.inject.name.Named(value=jclouds.credential) was bound.
while locating java.lang.String annotated with @com.google.inject.name.Named(value=jclouds.credential)
for parameter 2 at org.jclouds.aws.filters.FormSigner.<init>(FormSigner.java:91)
at org.jclouds.aws.config.AWSFormSigningRestClientModule.provideRequestSigner(AWSFormSigningRestClientModule.java:66)
1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:410)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:166)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:118)
at com.google.inject.InjectorBuilder.build(InjectorBuilder.java:100)
at com.google.inject.Guice.createInjector(Guice.java:95)
at com.google.inject.Guice.createInjector(Guice.java:72)
at org.jclouds.rest.RestContextBuilder.buildInjector(RestContextBuilder.java:141)
at org.jclouds.compute.ComputeServiceContextBuilder.buildInjector(ComputeServiceContextBuilder.java:53)
at org.jclouds.aws.ec2.EC2ContextBuilder.buildInjector(EC2ContextBuilder.java:101)
at org.jclouds.compute.ComputeServiceContextBuilder.buildComputeServiceContext(ComputeServiceContextBuilder.java:66)
at org.jclouds.compute.ComputeServiceContextFactory.buildContextUnwrappingExceptions(ComputeServiceContextFactory.java:72)
at org.jclouds.compute.ComputeServiceContextFactory.createContext(ComputeServiceContextFactory.java:114)
at org.apache.whirr.service.ComputeServiceContextBuilder.build(ComputeServiceContextBuilder.java:41)
at org.apache.whirr.service.hadoop.HadoopService.launchCluster(HadoopService.java:84)
at org.apache.whirr.service.hadoop.HadoopService.launchCluster(HadoopService.java:61)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:61)
at org.apache.whirr.cli.Main.run(Main.java:65)
at org.apache.whirr.cli.Main.main(Main.java:91)
My hadoop.properties file has an AWS Access Key and Secret Access Key.
Any pointers on what I might have done wrong and what I need to do to fix this?
Thanks!
Okay, so this appears to be a problem with the syntax in my hadoop.properties file. In the process of copying my keys across from the AWS management console, "whirr.credential" got truncated to "whirr.cred".
A classic face palm moment!
Anyway, leaving this up so that anyone googling for this error message knows to go triple check their hadoop.properties file!
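For anyone else hitting this: the unbound jclouds.credential error generally just means the credential property never made it into the config. A minimal hadoop.properties for this Whirr version looks roughly like the following (property names as in the Whirr 0.1.x getting-started guide; values are placeholders):

whirr.service-name=hadoop
whirr.cluster-name=myhadoopcluster
whirr.instance-templates=1 jt+nn,1 dn+tt
whirr.provider=ec2
whirr.identity=YOUR_AWS_ACCESS_KEY_ID
whirr.credential=YOUR_AWS_SECRET_ACCESS_KEY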