AWS MemoryDB not connecting after recreating it

I have provisioned AWS MemoryDB via Terraform and connected to it from .NET as shown below.
services.AddStackExchangeRedisCache(option =>
{
    option.ConfigurationOptions = new ConfigurationOptions()
    {
        EndPoints =
        {
            { redisConfig.Endpoint, redisConfig.Port } // Cluster endpoint
        },
        User = redisConfig.User,
        Password = redisConfig.Pass,
        Ssl = redisConfig.SSL,
        AbortOnConnectFail = redisConfig.AbortConnect,
        ConnectTimeout = 60000
    };
    //option.Configuration = $"{redisConfig.Host}:{redisConfig.Port},password={redisConfig.Pass},ssl={redisConfig.SSL},abortConnect={redisConfig.AbortConnect}";
});
But after deleting and recreating it, my MemoryDB suddenly stopped connecting and now throws a connection error. The error is below:
StackExchange.Redis.RedisConnectionException: No connection is active/available to service this operation: HMGET emv-test#gmail.com; UnableToConnect on clustercfg.xxxxx.xxxxx.memorydb.eu-central-1.amazonaws.com:6379/Interactive, Initializing/NotStarted, last: NONE, origin: BeginConnectAsync, outstanding: 0, last-read: 0s ago, last-write: 0s ago, keep-alive: 60s, state: Connecting, mgr: 10 of 10 available, last-heartbeat: never, global: 0s ago, v: 2.2.50.36290, mc: 1/1/0, mgr: 10 of 10 available, clientName: DESKTOP-V0GJHRO, IOCP: (Busy=2,Free=998,Min=8,Max=1000), WORKER: (Busy=5,Free=32762,Min=8,Max=32767), v: 2.2.50.36290
---> StackExchange.Redis.RedisConnectionException: UnableToConnect on clustercfg.newredis.78ank7.memorydb.eu-central-1.amazonaws.com:6379/Interactive, Initializing/NotStarted, last: NONE, origin: BeginConnectAsync, outstanding: 0, last-read: 0s ago, last-write: 0s ago, keep-alive: 60s, state: Connecting, mgr: 10 of 10 available, last-heartbeat: never, global: 0s ago, v: 2.2.50.36290
--- End of inner exception stack trace ---
at StackExchange.Redis.ConnectionMultiplexer.ThrowFailed[T](TaskCompletionSource`1 source, Exception unthrownException) in /_/src/StackExchange.Redis/ConnectionMultiplexer.cs:line 2799
It was working, but all of a sudden it stopped. I need help.

Related

Google Kubernetes Engine: The node was low on resource: ephemeral-storage, which exceeds its request of 0

I have a GKE cluster where I create jobs through Django; it runs my C++ code images, and the builds are triggered through GitHub. It was working just fine up until now. However, I recently pushed a new commit to GitHub (a really small change, three or four lines of basic operations) and it built an image as usual. But this time it said Pod errors: BackoffLimitExceeded, Error with exit code 137 when I tried to create the job, and the job does not complete.
I did some digging into the problem, and by running kubectl describe POD_NAME I got this output from a failed pod:
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-nqgnl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m32s default-scheduler Successfully assigned default/xvb8zfzrhhmz-jk9vf to gke-cluster-1-default-pool-ee7e99bb-xzhk
Normal Pulling 7m7s kubelet Pulling image "gcr.io/videoo3-360019/github.com/videoo-io/videoo-render:latest"
Normal Pulled 4m1s kubelet Successfully pulled image "gcr.io/videoo3-360019/github.com/videoo-io/videoo-render:latest" in 3m6.343917225s
Normal Created 4m1s kubelet Created container jobcontainer
Normal Started 4m kubelet Started container jobcontainer
Warning Evicted 3m29s kubelet The node was low on resource: ephemeral-storage. Container jobcontainer was using 91144Ki, which exceeds its request of 0.
Normal Killing 3m29s kubelet Stopping container jobcontainer
Warning ExceededGracePeriod 3m19s kubelet Container runtime did not kill the pod within specified grace period.
The error occurs because of this line:
The node was low on resource: ephemeral-storage. Container jobcontainer was using 91144Ki, which exceeds its request of 0.
I do not have a YAML file where I set my pod information; instead, a Django call handles the configuration, which looks like this:
def kube_create_job_object(name, container_image, namespace="default", container_name="jobcontainer", env_vars={}):
    # Body is the object Body
    body = client.V1Job(api_version="batch/v1", kind="Job")
    # Body needs Metadata
    # Attention: Each JOB must have a different name!
    body.metadata = client.V1ObjectMeta(namespace=namespace, name=name)
    # And a Status
    body.status = client.V1JobStatus()
    # Now we start with the Template...
    template = client.V1PodTemplate()
    template.template = client.V1PodTemplateSpec()
    # Passing Arguments in Env:
    env_list = []
    for env_name, env_value in env_vars.items():
        env_list.append(client.V1EnvVar(name=env_name, value=env_value))
    print(env_list)
    security = client.V1SecurityContext(privileged=True, allow_privilege_escalation=True, capabilities=client.V1Capabilities(add=["CAP_SYS_ADMIN"]))
    container = client.V1Container(name=container_name, image=container_image, env=env_list, stdin=True, security_context=security)
    template.template.spec = client.V1PodSpec(containers=[container], restart_policy='Never')
    body.spec = client.V1JobSpec(backoff_limit=0, ttl_seconds_after_finished=600, template=template.template)
    return body

def kube_create_job(manifest, output_uuid, output_signed_url, webhook_url, valgrind, sleep, isaudioonly):
    credentials, project = google.auth.default(
        scopes=['https://www.googleapis.com/auth/cloud-platform', ])
    credentials.refresh(google.auth.transport.requests.Request())
    cluster_manager = ClusterManagerClient(credentials=credentials)
    cluster = cluster_manager.get_cluster(name=f"path/to/cluster")
    with NamedTemporaryFile(delete=False) as ca_cert:
        ca_cert.write(base64.b64decode(cluster.master_auth.cluster_ca_certificate))
    config = client.Configuration()
    config.host = f'https://{cluster.endpoint}:443'
    config.verify_ssl = True
    config.api_key = {"authorization": "Bearer " + credentials.token}
    config.username = credentials._service_account_email
    config.ssl_ca_cert = ca_cert.name
    client.Configuration.set_default(config)
    # Setup K8 configs
    api_instance = kubernetes.client.BatchV1Api(kubernetes.client.ApiClient(config))
    container_image = get_first_success_build_from_list_builds(client)
    name = id_generator()
    body = kube_create_job_object(name, container_image,
                                  env_vars={
                                      "PROJECT": json.dumps(manifest),
                                      "BUCKET": settings.GS_BUCKET_NAME,
                                  })
    try:
        api_response = api_instance.create_namespaced_job("default", body, pretty=True)
        print(api_response)
    except ApiException as e:
        print("Exception when calling BatchV1Api->create_namespaced_job: %s\n" % e)
    return body
What causes this and how can I fix it? Am I supposed to set resource/limit variables to a value, and if so, how can I do that inside my Django job call?
It looks like you are running out of storage on the actual node itself. Since your job spec does not have a request for ephemeral storage, it is scheduled onto any node, and in this case it appears that particular node does not have enough storage available.
I'm not a Python expert, but it looks like you should be able to do something like:
storage_size = SOME_VALUE  # a Kubernetes quantity string, e.g. "1Gi"
requests = {'ephemeral-storage': storage_size}
resources = client.V1ResourceRequirements(requests=requests)
container = client.V1Container(name=container_name, image=container_image, env=env_list, stdin=True, security_context=security, resources=resources)
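For completeness, here is a minimal sketch (my own addition, not from the answer above) of how that could slot into the kube_create_job_object function, with an optional matching limit next to the request; the "1Gi"/"2Gi" sizes are illustrative placeholders, not measured values:
# Sketch: give the job container an ephemeral-storage request (what the
# scheduler reserves on a node) and an optional limit (the eviction threshold).
resources = client.V1ResourceRequirements(
    requests={"ephemeral-storage": "1Gi"},
    limits={"ephemeral-storage": "2Gi"},
)
container = client.V1Container(name=container_name, image=container_image, env=env_list,
                               stdin=True, security_context=security, resources=resources)
With a request in place, the scheduler only considers nodes that have that much ephemeral storage free, which is exactly what the "exceeds its request of 0" eviction was missing.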

Encountered an error while attempting to update latest block

I'm new to blockchain and I was just trying to deploy a simple smart contract to the Ropsten testnet. I used the smart contract code from https://github.com/t4sk/solidity-multi-sig-wallet. I'm also using the account provided by truffle develop.
My truffle-config.js:
networks: {
    development: {
        host: "127.0.0.1",     // Localhost (default: none)
        port: 8545,            // Standard Ethereum port (default: none)
        network_id: "*",       // Any network (default: none)
    },
    ropsten: {
        provider: () => new HDWalletProvider(mnemonic, `https://ropsten.infura.io/v3/${infuraKey}`),
        network_id: 3,       // Ropsten's id
        gas: 5500000,        // Ropsten has a lower block limit than mainnet
        confirmations: 2,    // # of confs to wait between deployments. (default: 0)
        timeoutBlocks: 200,  // # of blocks before a deployment times out (minimum/default: 50)
        skipDryRun: true     // Skip dry run before migrations? (default: false for public nets)
    },
},
mocha: {
    timeout: 100000
},
compilers: {
    solc: {
        version: "0.5.1",    // Fetch exact version from solc-bin (default: truffle's version)
        // docker: true,     // Use "0.5.1" you've installed locally with docker (default: false)
        // settings: {       // See the solidity docs for advice about optimization and evmVersion
        optimizer: {
            enabled: false,
            runs: 200
        },
        // evmVersion: "byzantium"
        // }
    }
},
I'm using Solidity 0.5.1.
But when I try to deploy it using truffle migrate --network ropsten I get the following two errors:
1.
This version of µWS is not compatible with your Node.js build:
Error: Cannot find module './uws_win32_x64_72.node'
Falling back to a NodeJS implementation; performance may be degraded.
1_initial_migration.js
======================
Deploying 'Migrations'
----------------------
> transaction hash: 0x673a9a02662595075c6f3aa4dc904d24203cb8e460a3e20a630869c5155cb78c
> Blocks: 2 Seconds: 53
> contract address: 0xde674E126884c8F7Ddd94B5013065596b81fEd6d
> block number: 12075322
> block timestamp: 1647065140
> account: 0xC10352218af6Ccbb574Fd0912adcc9Ac59C22950
> balance: 1.830076836873988898
> gas used: 175087 (0x2abef)
> gas price: 2.500000028 gwei
> value sent: 0 ETH
> total cost: 0.000437717504902436 ETH
Pausing for 2 confirmations...
-------------------------------
2.
C:\Users\coolg\Desktop\hd_wallet\node_modules\request\request.js:848
var e = new Error('ETIMEDOUT')
^
Error: PollingBlockTracker - encountered an error while attempting to update latest block:
Error: ETIMEDOUT
at Timeout.<anonymous> (C:\Users\coolg\Desktop\hd_wallet\node_modules\request\request.js:848:19)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)
at PollingBlockTracker._performSync (C:\Users\coolg\Desktop\hd_wallet\node_modules\eth-block-tracker\src\polling.js:51:24)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
Also, a transaction for the above smart contract address 0xde674E126884c8F7Ddd94B5013065596b81fEd6d was created on the Ropsten network.
Edit 1:
I've replaced the provider link with wss://ropsten.infura.io/v3/${infuraKey} and the second issue got resolved, but now it shows another error:
1_initial_migration.js
======================
Deploying 'Migrations'
----------------------
> transaction hash: 0xb72aef24e5fc16395f1dc221965c4e2036b4d72babbe829f244f958d302baee5
> Blocks: 7 Seconds: 228
> contract address: 0xb81478b107D5B08B0F9ce8d0E404701a3D2292a0
> block number: 12076445
> block timestamp: 1647090364
> account: 0xC10352218af6Ccbb574Fd0912adcc9Ac59C22950
> balance: 1.828763684345799891
> gas used: 175087 (0x2abef)
> gas price: 2.500000007 gwei
> value sent: 0 ETH
> total cost: 0.000437717501225609 ETH
Pausing for 2 confirmations...
-------------------------------
> confirmation number: 3 (block: 12076452)
⠦ Saving migration to chain.
Exiting: Review successful transactions manually by checking the transaction hashes above on Etherscan.
Error: Transaction was not mined within 750 seconds, please make sure your transaction was properly sent. Be aware that it might still be mined!
It says the transaction might still be mined, so how will I know when my transaction gets mined? Also, in the meantime, can I call the functions/events of my smart contract that is deployed on Ropsten? https://ropsten.etherscan.io/address/0xb81478b107d5b08b0f9ce8d0e404701a3d2292a0
The problem here may be the address you are trying to reach. I searched for what the problem might be and found two threads where people have already described the problem and more or less found a solution. Most likely one of these options will help solve the problem.
The first solution is here. The idea is to replace https with wss. There should be something like this:
testnet: {
    provider: () => new HDWalletProvider(mnemonic, `wss://ropsten.infura.io/v3/${infuraKey}`),
    ...
}
Then I searched some more and found this post. The author says the problem could be due to DNS or a slow internet connection, and suggests adding two parameters to the config:
testnet: {
    ...,
    networkCheckTimeout: 10000,
    timeoutBlocks: 200
}

Error: Deployment Failed while using Matic Test Network

I am getting started with application development on Matic, and I am following the instructions provided in the docs: https://docs.matic.network/docs/develop/getting-started
But I ran into a problem while using Truffle. After I run the command
truffle migrate --network matic
The error is as follows:
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
Starting migrations...
======================
> Network name: 'matic'
> Network id: 80001
> Block gas limit: 20000000 (0x1312d00)
1_initial_migration.js
======================
Deploying 'Migrations'
----------------------
Error: *** Deployment Failed ***
"Migrations" -- insufficient funds for gas * price + value.
at /usr/local/lib/node_modules/truffle/build/webpack:/packages/deployer/src/deployment.js:365:1
at process._tickCallback (internal/process/next_tick.js:68:7)
Truffle v5.1.55 (core: 5.1.55)
Node v10.19.0
The Truffle configuration file is as follows:
const HDWalletProvider = require('truffle-hdwallet-provider');
const fs = require('fs');
const mnemonic = fs.readFileSync(".secret").toString().trim();

module.exports = {
    networks: {
        development: {
            host: "127.0.0.1",   // Localhost (default: none)
            port: 8545,          // Standard Ethereum port (default: none)
            network_id: "*",     // Any network (default: none)
        },
        matic: {
            provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.matic.today`),
            network_id: 80001,
            confirmations: 2,
            timeoutBlocks: 200,
            skipDryRun: true
        },
    },
    // Set default mocha options here, use special reporters etc.
    mocha: {
        // timeout: 100000
    },
    // Configure your compilers
    compilers: {
        solc: {
        }
    }
}
It worked fine on the development network using
truffle develop
Can someone tell me how to overcome this error while using the Matic test network?
Make sure you have enough MATIC for the transaction; you can get some from here and transfer them to accounts[0]. I had issues deploying with Truffle because all my tokens were in accounts[1] rather than accounts[0].
In my case it was ETH; ensuring I had tokens in the first account did the trick.
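If you want to double-check which account actually holds the funds before migrating, here is a hedged sketch using web3.py v6 naming (my own illustration; any RPC client works, and the address placeholder must be replaced with your accounts[0] address as derived from your mnemonic):
# Sketch: query an account's MATIC balance on the Mumbai testnet with web3.py.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc-mumbai.matic.today"))  # RPC URL from the config above
address = "0x..."  # placeholder: your accounts[0] address
balance_wei = w3.eth.get_balance(address)
print(w3.from_wei(balance_wei, "ether"), "MATIC")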

Connecting cassandra-stress to AWS Keyspaces

I've provisioned a Keyspace on AWS, and in order to make sure it can achieve our desired performance, I'm trying to run the cassandra-stress tool against it and compare it to other architectures we're experimenting with.
I managed to connect to it using the following cqlshrc:
[connection]
port = 9142
factory = cqlshlib.ssl.ssl_transport_factory
[ssl]
validate = true
certfile = /root/.cassandra/AmazonRootCA1.pem
And the following command (hoping that Python 3 support will arrive soon enough; according to their Jira ticket, development was completed this February):
cqlsh cassandra.eu-central-1.amazonaws.com 9142 -u "myuser-at-722222222222" -p "12/12ZmHmtD1klsDk9cgqt/XXXXXXXXxUz6Sy687z/U=" --ssl --cqlversion="3.4.4"
Surprisingly or not, when using the official AWS guides things tend to work.
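As an aside, the same endpoint, certificate, and service-specific credentials can be exercised programmatically. This is a rough sketch with the DataStax Python driver (my own addition, assuming cassandra-driver is installed and that the paths and credentials match the cqlshrc above):
# Sketch: connect to Amazon Keyspaces over TLS with the DataStax Python driver.
from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("/root/.cassandra/AmazonRootCA1.pem")  # same CA as in cqlshrc
ssl_context.verify_mode = CERT_REQUIRED

auth = PlainTextAuthProvider(username="myuser-at-722222222222",
                             password="12/12ZmHmtD1klsDk9cgqt/XXXXXXXXxUz6Sy687z/U=")
cluster = Cluster(["cassandra.eu-central-1.amazonaws.com"], port=9142,
                  ssl_context=ssl_context, auth_provider=auth)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())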
So I went on and tried connecting the cassandra-stress tool (I have it inside a Docker container; I'd rather keep my OS Java-free) to the same Keyspace.
First I converted the AWS AmazonRootCA1.pem into cassandra_truststore.jks using the following commands (explained here):
openssl x509 -outform der -in AmazonRootCA1.pem -out temp_file.der
keytool -import -alias cassandra -keystore cassandra_truststore.jks -file temp_file.der
Now when I'm trying to run the actual tool like this:
./cassandra-stress write -node cassandra.eu-central-1.amazonaws.com -port native=9142 thrift=9142 jmx=9142 -transport truststore=/root/.cassandra/cassandra_truststore.jks truststore-password=mypassword -mode native cql3 user="myuser-at-722222222222" password="12/12ZmHmtD1klsDk9cgqt/XXXXXXXXxUz6Sy687z/U="
I'm getting the following error:
******************** Stress Settings ********************
Command:
Type: write
Count: -1
No Warmup: false
Consistency Level: LOCAL_ONE
Target Uncertainty: 0.020
Minimum Uncertainty Measurements: 30
Maximum Uncertainty Measurements: 200
Key Size (bytes): 10
Counter Increment Distibution: add=fixed(1)
Rate:
Auto: true
Min Threads: 4
Max Threads: 1000
Population:
Sequence: 1..1000000
Order: ARBITRARY
Wrap: true
Insert:
Revisits: Uniform: min=1,max=1000000
Visits: Fixed: key=1
Row Population Ratio: Ratio: divisor=1.000000;delegate=Fixed: key=1
Batch Type: not batching
Columns:
Max Columns Per Key: 5
Column Names: [C0, C1, C2, C3, C4]
Comparator: AsciiType
Timestamp: null
Variable Column Count: false
Slice: false
Size Distribution: Fixed: key=34
Count Distribution: Fixed: key=5
Errors:
Ignore: false
Tries: 10
Log:
No Summary: false
No Settings: false
File: null
Interval Millis: 1000
Level: NORMAL
Mode:
API: JAVA_DRIVER_NATIVE
Connection Style: CQL_PREPARED
CQL Version: CQL3
Protocol Version: V4
Username: myuser-at-722222222222
Password: *suppressed*
Auth Provide Class: null
Max Pending Per Connection: 128
Connections Per Host: 8
Compression: NONE
Node:
Nodes: [cassandra.eu-central-1.amazonaws.com]
Is White List: false
Datacenter: null
Schema:
Keyspace: keyspace1
Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
Replication Strategy Pptions: {replication_factor=1}
Table Compression: null
Table Compaction Strategy: null
Table Compaction Strategy Options: {}
Transport:
factory=org.apache.cassandra.thrift.TFramedTransportFactory; truststore=/root/.cassandra/cassandra_truststore.jks; truststore-password=mypassword; keystore=null; keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA;
Port:
Native Port: 9142
Thrift Port: 9142
JMX Port: 9142
Send To Daemon:
*not set*
Graph:
File: null
Revision: unknown
Title: null
Operation: WRITE
TokenRange:
Wrap: false
Split Factor: 1
java.lang.RuntimeException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra.eu-central-1.amazonaws.com/3.127.48.183:9142 (com.datastax.driver.core.exceptions.TransportException: [cassandra.eu-central-1.amazonaws.com/3.127.48.183] Channel has been closed))
at org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:220)
at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79)
at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69)
at org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:228)
at org.apache.cassandra.stress.StressAction.run(StressAction.java:57)
at org.apache.cassandra.stress.Stress.run(Stress.java:143)
at org.apache.cassandra.stress.Stress.main(Stress.java:62)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra.eu-central-1.amazonaws.com/3.127.48.183:9142 (com.datastax.driver.core.exceptions.TransportException: [cassandra.eu-central-1.amazonaws.com/3.127.48.183] Channel has been closed))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:403)
at org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:160)
at org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:211)
... 6 more
I've tried changing some parameters, such as the JKS password (just in case I had it wrong), but that produced a different error message, so that's probably not the cause.
Did I miss something?
Try using TLP Stress instead.
tlp-stress run RandomPartitionAccess -d 10m --host cassandra.us-east-1.amazonaws.com --port 9142 --username alice --password fLyWYFlTCD5J2gzGAZ --ssl --max-requests 4000 --dc us-east-2 --threads 10
https://thelastpickle.com/tlp-stress/

Realm Object Server (ROS) on AWS: how to make a valid Websocket request?

Setup:
I am using iOS 11 and Xcode 9.
I am testing Realm for the first time and considering it as an alternative to Firebase. I got a test server up and running on AWS EC2 based on the public Tokyo AMI provided. The dashboard works, adding users from Swift code works, and realms are even created.
Problem:
I cannot write; I get either
"Connection[1]: SSL handshake failed: Premature end of input" when I use "realms://", or "Connection[1]: Writing failed: End of input" when I use "realm://" as the sync server URL. I tried googling the SSL error and did not find any matches.
A lot of the code has changed since the tutorials on the Realm website were made, so I have had to improvise; perhaps there is some really obvious mistake, or perhaps it's my server config?
Here's my code.
var realm: Realm?
if let serverURL = URL(string: "http://13.112.252.130:9080") {
    let usernameCredentials = SyncCredentials.usernamePassword(username: "raul", password: "abc123", register: false)
    SyncUser.logIn(with: usernameCredentials, server: serverURL) { user, error in
        if let user = user {
            print("User \(user) is admin: \(user.isAdmin)")
            if let syncServerURL = URL(string: "realms://13.112.252.130:9080/~/addressBook") {
                let config = Realm.Configuration(syncConfiguration: SyncConfiguration(user: user, realmURL: syncServerURL))
                realm = try? Realm(configuration: config)
                print("Successfully connected to realm!")
                let contact = Contact()
                contact.name = "John Doe"
                contact.phone = "123456789"
                contact.email = "john.doe#gmail.com"
                if let realm = realm {
                    self.contactResults = realm.objects(Contact.self).sorted(byKeyPath: "name", ascending: true)
                    try? realm.write {
                        realm.add(contact)
                        print("wrote to realm!")
                    }
                }
            }
        } else if let error = error {
            print("Error: \(error.localizedDescription)")
        }
    }
}
Here's the error log with "realms://", and this goes into an infinite loop:
2017-09-02 07:37:18.223475+0700 RealmAdressbook[7253:3703339] refreshPreferences: HangTracerEnabled: 1
2017-09-02 07:37:18.223532+0700 RealmAdressbook[7253:3703339] refreshPreferences: HangTracerDuration: 500
2017-09-02 07:37:18.223551+0700 RealmAdressbook[7253:3703339] refreshPreferences: ActivationLoggingEnabled: 0 ActivationLoggingTaskedOffByDA:0
User is admin: false
Successfully connected to realm!
2017-09-02 07:37:19.319628+0700 RealmAdressbook[7253:3703453] Sync: Opening Realm file: /var/mobile/Containers/Data/Application/2A52579D-4863-4FC3-88DA-31F2EC2549E5/Documents/realm-object-server/64e042b0-d753-4ebf-b5a4-de8f8f56142f/realms%3A%2F%2F13.112.252.130%3A9080%2F%7E%2FaddressBook
2017-09-02 07:37:19.320459+0700 RealmAdressbook[7253:3703453] Sync: Connection[1]: Session[1]: Starting session for '/var/mobile/Containers/Data/Application/2A52579D-4863-4FC3-88DA-31F2EC2549E5/Documents/realm-object-server/64e042b0-d753-4ebf-b5a4-de8f8f56142f/realms%3A%2F%2F13.112.252.130%3A9080%2F%7E%2FaddressBook'
2017-09-02 07:37:19.320591+0700 RealmAdressbook[7253:3703453] Sync: Connection[1]: Resolving '13.112.252.130:9080'
2017-09-02 07:37:19.322722+0700 RealmAdressbook[7253:3703453] Sync: Connection[1]: Connecting to endpoint '13.112.252.130:9080' (1/1)
2017-09-02 07:37:19.458271+0700 RealmAdressbook[7253:3703453] Sync: Connection[1]: Connected to endpoint '13.112.252.130:9080' (from xxxxxxxxxxxxx)
2017-09-02 07:37:19.597335+0700 RealmAdressbook[7253:3703453] Sync: Connection[1]: SSL handshake failed: Premature end of input
2017-09-02 07:37:19.597609+0700 RealmAdressbook[7253:3703453] Sync: Connection[1]: Connection closed due to error
And this is the log when I use "realm://"; it goes into a loop as well:
2017-09-02 07:41:00.362705+0700 RealmAdressbook[7263:3705293] refreshPreferences: HangTracerEnabled: 1
2017-09-02 07:41:00.362762+0700 RealmAdressbook[7263:3705293] refreshPreferences: HangTracerDuration: 500
2017-09-02 07:41:00.362782+0700 RealmAdressbook[7263:3705293] refreshPreferences: ActivationLoggingEnabled: 0 ActivationLoggingTaskedOffByDA:0
User is admin: false
Successfully connected to realm!
wrote to realm!
2017-09-02 07:41:01.524168+0700 RealmAdressbook[7263:3705496] Sync: Opening Realm file: /var/mobile/Containers/Data/Application/36B5D609-1E8B-48FD-B20A-F5DF4EB21384/Documents/realm-object-server/64e042b0-d753-4ebf-b5a4-de8f8f56142f/realm%3A%2F%2F13.112.252.130%3A9080%2F%7E%2FaddressBook
2017-09-02 07:41:01.525491+0700 RealmAdressbook[7263:3705496] Sync: Connection[1]: Session[1]: Starting session for '/var/mobile/Containers/Data/Application/36B5D609-1E8B-48FD-B20A-F5DF4EB21384/Documents/realm-object-server/64e042b0-d753-4ebf-b5a4-de8f8f56142f/realm%3A%2F%2F13.112.252.130%3A9080%2F%7E%2FaddressBook'
2017-09-02 07:41:01.526011+0700 RealmAdressbook[7263:3705496] Sync: Connection[1]: Resolving '13.112.252.130:9080'
2017-09-02 07:41:01.527816+0700 RealmAdressbook[7263:3705496] Sync: Connection[1]: Connecting to endpoint '13.112.252.130:9080' (1/1)
2017-09-02 07:41:01.663245+0700 RealmAdressbook[7263:3705496] Sync: Connection[1]: Connected to endpoint '13.112.252.130:9080' (from '192.168.1.4:59862')
2017-09-02 07:41:01.819181+0700 RealmAdressbook[7263:3705496] Sync: Connection[1]: Writing failed: End of input
2017-09-02 07:41:01.819320+0700 RealmAdressbook[7263:3705496] Sync: Connection[1]: Connection closed due to error
Judging from the logs on the server, it seems that something is wrong with my request headers. How do I fix this?
proxy: [syncProxy] internal error: Error: socket hang up at createHangUpError (_http_client.js:253:15) at Socket.socketOnEnd (_http_client.js:345:23) at emitNone (events.js:91:20) at Socket.emit (events.js:185:7) at endReadableNT (_stream_readable.js:974:12) at _combinedTickCallback (internal/process/next_tick.js:80:11) at process._tickCallback (internal/process/next_tick.js:104:9).
1:51:33 AM info
sync: HTTP Connection[714]: Connection closed due to error
1:51:33 AM error
sync: HTTP Connection[714]: Check the proxy configuration and make sure that the HTTP request is a valid Websocket request. The header values are case sensitive
1:51:33 AM error
sync: HTTP Connection[714]: The HTTP request with the error is: GET /realm-object-server HTTP/1.1
connection: Upgrade
host: 13.112.252.130
sec-websocket-key: 7FDPgyFxq/GT1tKfIMJNcg==
sec-websocket-protocol: io.realm.sync.19
sec-websocket-version: 13
upgrade: websocket
x-realm-access-token: eyJhY2Nlc3MiOlsiZG93bmxvYWQiLCJ1cGxvYWQiLCJtYW5hZ2UiXSwiYXBwX2lkIjoiY29tLmJhbWJhbWxhYnMuUmVhbG1BZHJlc3Nib29rIiwiZXhwaXJlcyI6MTUwNDM3ODg4NiwiaWQiOiJlMjI0YTM5NmU4YTI0OWU1ODlhNWQ4OWM0ODczOTMzOCIsImlkZW50aXR5IjoiNjE1ZWUxMjU0MDA4ZDA5MWJiYTc1MjU4YTAyZWViZjYiLCJwYXRoIjoiLzYxNWVlMTI1NDAwOGQwOTFiYmE3NTI1OGEwMmVlYmY2L215UmVhbG0ifQ==:hyX8GtVHMIBho3Zw6pZfp9Gnl6O0C0Rl73V0EdX/a4ZWXMxcySFZmWbs0CxmjnpZUDNnFDK3PpXspN1YnGu2c5ByuRIpgpT7hkzwAil2EQzFeKFycYXwTbsp3a6X9npHETjxUfe9QWIIA5drz3VRPUI+0Tj+qspjbyPBcMhL6ZH3A8ubZHOIpjJpxRWGZbghdznf0g71Ta0SDyCYT4GB+fHuddzUH7RZgLkzBfoyIdJyfGccwVi1Qe/c0GTPzkH12TSyzHSwx9PnGadl1vBRuPci6fs+TE03rx6Gy7v73I37JpVVsiPm1omMG7FBdi60iQYQvItiycnle/rvb6+u3w==
x-realm-path: /615ee1254008d091bba75258a02eebf6/myRealm
1:51:33 AM error
sync: HTTP Connection[714]: There must be a header of the form 'Sec-WebSocket-Protocol: io.realm.protocol'
1:51:33 AM info
sync: HTTP Connection[714]: Received: Sync HTTP request(protocol_version=-1)
The reasons for the failure in the SSL case and in the non-SSL case are probably different.
For the SSL case, are you sure that you want to use port 9080?
Usually Realm uses 9443 for SSL connections.
For the non-SSL case, the problem is that the headers apparently have been rewritten by an intermediate proxy. The request received by the server is different from what is expected. We can fix this at Realm. The only thing you can do right now is to change the proxy. Thanks for reporting this.
Edited answer: For the non-SSL case, the reason the headers are not recognized by the server is that you are using an older version of the server.
It seems that you have upgraded the client without upgrading the server.
Try using the latest version of the server.
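If you want to check which of those ports actually terminates TLS before changing the client URL, a rough probe can be written in a few lines of Python (my own sketch; the host below is the EC2 address from the question, and the ports are the two candidates discussed above):
# Sketch: test whether host:port completes a TLS handshake, to tell a plain
# HTTP port (like 9080 here) apart from an SSL-enabled one (usually 9443).
import socket
import ssl

def tls_handshake_ok(host, port, timeout=5.0):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # we only care whether TLS is spoken at all
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False

print(tls_handshake_ok("13.112.252.130", 9080))  # expected False if 9080 is plain HTTP
print(tls_handshake_ok("13.112.252.130", 9443))  # expected True if SSL sync is enabled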