Scenario: I upload an image file to the server, but when I try to access it using curl, I only get it on every 2nd request.
e.g. curl http://staging.muserver.com/system/assets/images/000/000/test.png
When I get a 200 OK, the response includes an ETag header; when it fails, I see X-Request-Id and X-Runtime instead.
This error happens only on the Amazon pre-production environment; I cannot reproduce it on my local machine.
Frederick was right... I didn't set up a load balancer myself, but Engine Yard supplies one by default; with it disabled, everything works fine.
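For anyone hitting a similar alternating-response pattern, here is a minimal diagnostic sketch in Python (using the requests library; the URL is the one from the question). Alternating statuses would confirm that only one backend behind the load balancer actually has the uploaded file:

import requests

URL = "http://staging.muserver.com/system/assets/images/000/000/test.png"

# Hit the asset several times and print the headers that differ between
# a successful response (ETag) and a failing one (X-Request-Id, X-Runtime).
for i in range(4):
    r = requests.get(URL)
    print(i, r.status_code,
          "ETag:", r.headers.get("ETag"),
          "X-Request-Id:", r.headers.get("X-Request-Id"))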
I get a java.lang.ClassNotFoundException: org.apache.flink.api.common.serialization.RuntimeContextInitializationContextAdapters when running my Apache Flink code locally.
It works fine when I use a local generator for my data stream.
As soon as I switch to an AWS FlinkKinesisConsumer to provide the DataStream, I get the above-mentioned error.
Did I forget to add a dependency?
I am using Sitecore 8.2 Update 6. Previously, my CM, Processing, and Reporting roles were all on a single CM server. Now I need to move Processing to a separate server, while Reporting and CM stay together on one server.
I have configured my processing server as described at the following URL:
https://doc.sitecore.net/sitecore_experience_platform/82/setting_up_and_maintaining/xdb/configuring_servers/configure_a_processing_server
and configured my connection strings per the following URL:
https://doc.sitecore.net/sitecore_experience_platform/81/setting_up_and_maintaining/xdb/configuring_servers/database_connection_strings_for_configuring_servers
Now I have a couple of questions:
1) Is any change required on my CM or CD servers to make them aware of the separate processing server?
2) How can I test whether my processing server is doing the required tasks?
Thanks,
Nicks
Your CM and CD do not need to know about the processing server, but you need to make sure that processing functions are not enabled on the CM or CD.
You will know if processing is working by looking at the logs and seeing if the pipelines are executing and not throwing errors.
You will also see analytics data being processed and showing up in the reporting database. If you are not seeing analytics data, this is an indication you might have errors in processing.
Note that there are several possible reasons reporting data might not be working, but if it is succeeding at getting your new analytics data, then processing is running.
I POST the following data to https://api.digitalocean.com/v2/droplets:
{
  "name": "1470293222",
  "image": "ubuntu-16-04-x64",
  "size": "512mb",
  "user_data": "#!/bin/bash\ncurl http://www.myserver.com",
  "region": "nyc1"
}
This should create a new droplet and run the script in user_data, but no matter what I do, I can't seem to get the script to run.
Strangely, if I launch a droplet from the DigitalOcean console, which appears NOT to use the REST API, the user_data script works fine.
Does anyone have any ideas on how to make a DigitalOcean boot script work?
What response are you receiving when you send that request? Based on the DigitalOcean API documentation for creating a droplet, it appears that your size field is incorrect. Instead of using 512mb, try something like s-1vcpu-1gb.
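If it helps, here is a rough sketch of the corrected request in Python (using the requests library; DO_TOKEN is a placeholder for your DigitalOcean API token, and the size slug is the one suggested above):

import requests

DO_TOKEN = "your-api-token"  # placeholder; substitute your real token

payload = {
    "name": "1470293222",
    "image": "ubuntu-16-04-x64",
    "size": "s-1vcpu-1gb",  # the slug suggested above, instead of "512mb"
    "user_data": "#!/bin/bash\ncurl http://www.myserver.com",
    "region": "nyc1",
}

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    json=payload,
    headers={"Authorization": "Bearer " + DO_TOKEN},
)
# A 202 with a droplet object means the request was accepted; an error
# body here often points at an invalid slug.
print(resp.status_code, resp.json())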
I have two Python projects running locally:
A Cloud Endpoints Python project using the latest App Engine version.
A client project which consumes the endpoint functions using the latest google-api-python-client (v1.5.1).
Everything was fine until I renamed one endpoint's function from:
@endpoints.method(MyRequest, MyResponse, path="save_ocupation", http_method="POST", name="save_ocupation")
def save_ocupation(self, request):
    [code here]
To:
@endpoints.method(MyRequest, MyResponse, path="save_occupation", http_method="POST", name="save_occupation")
def save_occupation(self, request):
    [code here]
Looking at the local console (http://localhost:8080/_ah/api/explorer) I see the correct function name.
However, by executing the client project that invokes the endpoint, it keeps saying that the new endpoint function does not exist. I verified this using the ipython shell: The dynamically-generated python code for invoking the Resource has the old function name despite restarting both the server and client dozens of times.
How can I force the API client to always fetch the latest endpoint discovery document?
Help is appreciated.
Just after posting the question, I resumed my Ubuntu PC and started Eclipse and the Python projects from scratch, and now everything works as expected. This sounds like some kind of HTTP client cache, or a stale Python process, that prevented the client from fetching the latest discovery document and generating the corresponding resource code.
This is odd, as I had tested running these projects both outside and inside Eclipse without success. But I prefer documenting this just in case someone else has this issue.
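For what it's worth, if the stale discovery document comes back, a sketch like the one below should force a fresh fetch. It assumes a hypothetical API name "myapi" and version "v1" (the real names aren't in the question) and relies on google-api-python-client's build() accepting cache_discovery, which it does in v1.5.x:

from googleapiclient.discovery import build

# Local Endpoints discovery template; build() expands {api} and {apiVersion}.
LOCAL_DISCOVERY = ("http://localhost:8080/_ah/api/discovery/v1/"
                   "apis/{api}/{apiVersion}/rest")

service = build(
    "myapi", "v1",                      # hypothetical API name and version
    discoveryServiceUrl=LOCAL_DISCOVERY,
    cache_discovery=False,              # skip the cached discovery document
)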
I get the following error: Service Invocation Exception. I am working with IBM InfoSphere DataStage and QualityStage Designer version 8.7, using a server job that contains a sequential file stage, a web service stage, and another sequential file stage.
Any idea what could be the reason for this error?
Make sure you have chosen the proper DataStage job type and that the stage that calls the web service is configured correctly.
You should also check the DataStage logs to get more information about the root cause of the error.