Short description
I am looking for a session manager for the Jetty server that can store session data in a Couchbase cluster. I want to take advantage of Couchbase clustering so that if one server goes down, the application is not affected.
Long description
Currently I am using a Couchbase cluster as the session store for the Jetty server in the following way:
Installed this library: https://github.com/yyuu/jetty-nosql-memcached
Created a default bucket on Couchbase with no password, which listens on port 1111 and communicates using the memcached protocol.
Configured the above library in Jetty, so the Jetty session store talks the memcached protocol to one of the nodes of the Couchbase cluster.
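As a quick sanity check of this wiring, the bucket can be exercised directly over the memcached protocol. A minimal sketch, assuming the python-memcached client; the hostname is a placeholder for one cluster node, and the port is the one configured above:

import memcache

# The memcached-compatible endpoint of one Couchbase node;
# the hostname is a placeholder.
mc = memcache.Client(["couchbase-node1:1111"])

# If the bucket is reachable over the memcached protocol,
# a round-trip set/get should succeed.
mc.set("session-test", "ok")
assert mc.get("session-test") == "ok"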
This setup works well, but there are a few limitations:
I cannot use a non-default bucket to store sessions.
I cannot set a bucket password.
If the one server of the cluster that I have configured in jetty.xml goes down, sessions stop working.
I am most concerned about point 3. Is there any session manager that fits these requirements?
If you are using Couchbase with a Couchbase bucket, you get automatic partitioning and replication of the sessions, so when a node goes down the cluster will fail over that node and your application will continue to work transparently.
The issues you describe in points 1 and 2 are not related to Couchbase but to the implementation of the jetty-nosql-memcached project. Maybe you can contribute to this project and add port configuration and SASL support.
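For contrast, a cluster-aware Couchbase client bootstraps from several seed nodes and keeps working through a failover. A sketch assuming the 2.x Couchbase Python SDK; the hosts, bucket name, and password are placeholders:

from couchbase.bucket import Bucket

# Multiple seed nodes: losing any single node does not break bootstrap.
bucket = Bucket(
    "couchbase://node1,node2,node3/sessions",  # placeholder hosts and bucket
    password="s3cret",                         # placeholder bucket password
)

bucket.upsert("session::1234", {"user": "alice"})

# While a failover is in progress, a read can fall back to a replica copy.
result = bucket.get("session::1234", replica=True)
print(result.value)

This is the behavior the memcached bridge gives up: it stays pinned to the single node configured in jetty.xml.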
Related
Guys, I am trying to implement a 3-tier architecture to host a web app on AWS.
The requirements given to me are as follows:
The app will leverage a 3-tier architecture:
A Web Server that will be running on S3
An application tier running on ECS Cluster on Fargate or a fleet of EC2s with ASG (your choice)
A data tier running on RDS Aurora PostgreSQL latest supported version
I understand perfectly what to do for the 2nd and 3rd requirements, the application and data tiers.
What I don't get is the “web server running on S3”. Is it possible to have a web server on S3?
What I know is that I can have a web server running on EC2.
Please, I need some explanation here.
Yes and no. S3 is a static file host: if all you want to do is send HTML, CSS, and JS files to the browser, then absolutely, yes, S3 can be used as a file-serving service: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
However, when your website does real-time HTML generation, such as SSR (server-side rendering), S3 won't cut it. S3 does not process code in any way; it only sends the files as-is to the frontend. In that case you need a more traditional server on EC2/ECS/EKS.
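To make the static case concrete, this is roughly how website hosting is enabled on a bucket with boto3; the bucket name is a placeholder, and the public-read bucket policy that website hosting also needs is omitted here:

import boto3

s3 = boto3.client("s3")

# Turn on static website hosting; "my-site-bucket" is a placeholder
# for an existing bucket.
s3.put_bucket_website(
    Bucket="my-site-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a static page; ContentType matters so browsers render it as HTML.
s3.put_object(
    Bucket="my-site-bucket",
    Key="index.html",
    Body=b"<html><body>Hello from S3</body></html>",
    ContentType="text/html",
)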
I currently have a hosted (GCP) microservice environment that is under development. When working on a service, I run the environment locally: I run all the services that the service I am working on needs to communicate with.
This is a bad developer experience because:
I have to spin up every service; there can be a lot of them
running so many services uses a lot of my system resources
if any of those services needs a DB, I have to set that up too
I'm looking for a solution to this. Ideally, I would run just the single service locally and connect to the rest of the services in the hosted environment.
Do any of the popular service meshes offer this as an option? I'm looking at Istio and Kuma primarily. Are there any alternative solutions that come to mind?
For remote development/debugging I would suggest having a look at Telepresence:
https://www.telepresence.io/
It is even recommended by the Kubernetes docs:
Using telepresence allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/
Istio, on the other hand, enables shadow deployments and canary or blue/green deployments. You can, for example, run a service and send certain users (based on a header) to a new version, mirror traffic to a service, or shift traffic from 0 to 100% step by step. I'd say it's more for testing your new service under load or gradually releasing a new version.
Using the AWS Aurora database service, you can configure master-slave replication and slave autoscaling (e.g. if a slave's CPU is higher than 75 percent, create a second slave).
A newly created database has a new endpoint (host) which is not yet registered with Django.
What would be the best approach to discover the newly created database and add it to a running Django application?
I am thinking about polling every X seconds using, let's say, the AWS CLI and checking how many slaves there are. But the problem with this is that if a slave is destroyed by the autoscaling group, my Django application would start erroring, so appropriate handling is also required...
You shouldn't be configuring each read replica's endpoint in Django. You should configure it to use the reader endpoint provided by Aurora, which load-balances requests across all read replicas in the cluster. Then, when a new read replica is added to the cluster, Django will automatically be using it.
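A minimal sketch of what that looks like in Django settings; the endpoint hostnames and credentials are placeholders for your own cluster's writer and reader endpoints:

# settings.py
DATABASES = {
    "default": {  # writer (cluster) endpoint
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "...",
        "HOST": "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
        "PORT": "5432",
    },
    "replica": {  # reader endpoint, load-balanced across all replicas
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "...",
        "HOST": "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com",
        "PORT": "5432",
    },
}

# routers.py - send reads to the reader endpoint, writes to the writer;
# register it with DATABASE_ROUTERS = ["routers.PrimaryReplicaRouter"].
class PrimaryReplicaRouter:
    def db_for_read(self, model, **hints):
        return "replica"

    def db_for_write(self, model, **hints):
        return "default"

Because the reader endpoint is a single DNS name, replicas can come and go behind it without any change on the Django side.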
I'm working on an Apache Spark application which I submit to an AWS EMR cluster from an Airflow task.
In the Spark application logic I need to read files from AWS S3 and information from AWS RDS. For example, in order to connect to AWS RDS for PostgreSQL from the Spark application, I need to provide the username/password for the database.
Right now I'm looking for the best and most secure way to keep these credentials in a safe place and pass them as parameters to my Spark application. Please suggest where to store these credentials in order to keep the system secure: as env vars, somewhere in Airflow, or where?
In Airflow you can create Variables to store this information. Variables can be listed, created, updated and deleted from the UI (Admin -> Variables). You can then access them from your code as follows:
from airflow.models import Variable
foo = Variable.get("foo")
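For structured secrets such as database credentials, a Variable can also hold JSON; a sketch, where the variable name rds_credentials is a placeholder you would create yourself:

from airflow.models import Variable

# Hypothetical Variable "rds_credentials" stored as JSON,
# e.g. {"user": "spark_app", "password": "..."}
creds = Variable.get("rds_credentials", deserialize_json=True)
username = creds["user"]
password = creds["password"]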
Airflow has got us covered beautifully on the credentials-management front by offering the Connection SQLAlchemy model, which can be accessed from the web UI (where passwords remain hidden).
You can control the Fernet key that Airflow uses to encrypt passwords when storing Connection details in its backend meta-db.
It also gives you an extra param for storing unstructured / client-specific stuff, such as a {"use_beeline": true} config for HiveServer2.
In addition to the web UI, you can also edit Connections via the CLI (as is true for pretty much every feature of Airflow).
Finally, if your use case involves dynamically creating / deleting a Connection, that is also possible by exploiting the underlying SQLAlchemy Session. You can see the implementation details in cli.py.
Note that Airflow treats all Connections equally irrespective of their type (the type is just a hint for the end user); Airflow distinguishes them on the basis of conn_id only.
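Reading such a Connection from task code looks like this; the conn_id my_postgres is a placeholder for a Connection created beforehand in the UI or CLI:

from airflow.hooks.base_hook import BaseHook

# "my_postgres" is a placeholder conn_id.
conn = BaseHook.get_connection("my_postgres")

host = conn.host
username = conn.login
password = conn.password    # decrypted transparently by Airflow
extra = conn.extra_dejson   # the unstructured "extra" JSON as a dict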
Sorry, I'm new to web servers. I want to deploy a cloud server for user data:
Users can log in via the web, with a verification code sent to the user's phone.
Users can manipulate their data (add/modify/remove) when logged in.
Android/iPhone clients can manipulate user data when logged in.
The server should have a database for storage, SQLite or another.
It would be good to use an Amazon/Ali-cloud service, provided it can speed up my deployment. I'm not sure whether I need to get into technologies such as H5, PHP/JSP, Node.js, or others. Can you provide a guide for me, a web link or a book?
And what's the most popular programming interface between an Android/iOS app and a cloud server? HTTP POST/GET or another wrapper?
Surely you can speed up your deployment using Amazon Web Services. This is my recommendation:
For the web server:
Amazon EC2: Launch an instance where you can install Apache/Nginx. You will need an RDS instance running in parallel with your server, which will lower the CPU/memory load on the server, but will also add cost.
For the database, you have several options:
Amazon RDS: Launch an instance where you host your database (MySQL/...). This will give you a database name, hostname, users, etc., which you can use to connect from your web server in EC2. Your Android/iOS application can use the same RDS information for its database connection.
Amazon DynamoDB: fast and flexible NoSQL (do you want a traditional database or NoSQL?): https://aws.amazon.com/dynamodb/
For mobile/website access control:
AWS Cognito: great for user accounts, designed for a real-time data model: https://aws.amazon.com/cognito/?nc1=f_ls
For serverless, if you want simple GET/PUT APIs without managing a web server:
AWS Lambda: https://aws.amazon.com/lambda/?nc1=f_ls
Taking into account that you are just starting with your application, I would suggest going with a serverless architecture, with AWS Lambda running your business logic (a minimal handler sketch follows the list below).
Key benefits:
No server management = spend time on building your application vs on maintaining infrastructure
Flexible scaling = scale based on what you really need
Pay for value = don't pay for resources that you don't need
Automated high availability = serverless provides built-in availability and fault tolerance
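To give a feel for what business logic on Lambda means in practice, a function is just a handler; a minimal Python sketch, assuming an API Gateway proxy integration in front of it:

import json

def lambda_handler(event, context):
    # Assumes an API Gateway proxy integration: the request body
    # arrives as a JSON string in event["body"].
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello, " + name}),
    }

AWS runs and scales this function for you; you pay per invocation rather than for an always-on server.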
To learn more about serverless, you may want to check Building Serverless Web Applications - 2017 AWS Online Tech Talks.
When it comes to going deeper, I would suggest checking the online training available from acloud.guru, Cloud Academy, Udemy, or Linux Academy, both for serverless and for the development language you want to use (Node.js is often used for such scenarios).