How can we run Neptune graph database on Docker?

Since Neptune DB was productized only recently, it is not available on LocalStack. Can someone guide me on how to deploy the AWS Neptune DB service in a Docker container?

You don't deploy Neptune; you deploy a client application that uses an appropriate client library to access Neptune. The Neptune software and hardware are managed by AWS, and you can't access them except via the API.

My guess is that you're attempting to create a local Neptune-compatible Docker container (i.e. some Docker container with a compatible API). This would be similar to using MinIO when performing local integration testing with S3. If this is indeed what you're in search of, I'd recommend using TinkerPop's gremlin-server image. This should get the job done for you, since Neptune uses Gremlin as its query language.
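As a minimal sketch of that approach, assuming the stock tinkerpop/gremlin-server image and the gremlinpython client (port 8182 is the TinkerPop default; nothing here is Neptune-specific):

    # Start the server first, e.g.:
    #   docker run -d -p 8182:8182 tinkerpop/gremlin-server
    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    from gremlin_python.process.anonymous_traversal import traversal

    # Connect to the local Gremlin Server over its default WebSocket endpoint.
    g = traversal().withRemote(
        DriverRemoteConnection('ws://localhost:8182/gremlin', 'g'))

    # Exercise the graph the same way your Neptune code would.
    g.addV('person').property('name', 'alice').iterate()
    print(g.V().hasLabel('person').values('name').toList())

Bear in mind that Gremlin Server won't reproduce Neptune-specific behaviour such as IAM authentication, so treat it purely as an integration-test stand-in.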

For now, I have found only one way: the Pro version of LocalStack, which includes Neptune DB. https://localstack.cloud/features/ Unfortunately, the free version of the test container does not support the DB interface. =(

Neptune is a fully managed graph database, not a binary that can be independently deployed in your personal containers or infrastructure. You can run your client application in your own custom Docker containers, and set up your network such that the container makes requests to the managed Neptune cluster that you have created.
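As a sketch, a containerized client using gremlinpython would just point at the cluster endpoint (the hostname below is a made-up placeholder; the container needs network access to the cluster's VPC):

    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    from gremlin_python.process.anonymous_traversal import traversal

    # Hypothetical endpoint -- substitute the cluster endpoint shown in
    # your Neptune console. Neptune speaks Gremlin over wss:// on 8182.
    endpoint = ('wss://my-cluster.cluster-abc123.us-east-1'
                '.neptune.amazonaws.com:8182/gremlin')
    g = traversal().withRemote(DriverRemoteConnection(endpoint, 'g'))
    print(g.V().limit(1).toList())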
Hope this helps.

Related

Can we run an application that is configured to run on a multi-node AWS EC2 K8s cluster using kops on a local Kubernetes cluster (using kubeadm)?

Can we run an application that is configured to run on a multi-node AWS EC2 K8s cluster using kops (project link) on a local Kubernetes cluster (set up using kubeadm)?
My thinking is that if the application runs in a K8s cluster based on AWS EC2 instances, it should also run in a local K8s cluster. I am trying it locally for testing purposes.
Here's what I have tried so far, but it is not working:
First, I set up my local 2-node cluster using kubeadm.
Then I modified the project's installation script (link given above) by removing all references to EC2 (as I am using local machines) and to the kops state (particularly in their create_cluster.py script).
I have modified their application YAML files (app requirements) to match my local 2-node setup.
Unfortunately, although most of the application pods are created and in a running state, some other application pods fail to be created, and therefore I am unable to run the whole application on my local cluster.
I appreciate your help.
That is the beauty of Docker and Kubernetes: they help keep your development environment matching production. For simple applications, written without custom resources, you can deploy the same workload to any cluster running on any cloud provider.
However, the ability to deploy the same workload to different clusters depends on several factors, such as:
How do you manage authorization and authentication in your cluster? For example, IAM or IRSA.
Are you using any cloud-native custom resources? For example, AWS ALBs used as LoadBalancer Services.
Are you using any cloud-native storage? For example, pods that rely on EFS/EBS volumes.
Is your application cloud-agnostic, or does it use cloud-native technologies like Neptune?
Can you mock cloud technologies locally? For example, using LocalStack to mock Kinesis and DynamoDB.
How do you resolve DNS routes? For example, say you are using RDS on AWS; you can access it using a Route 53 entry. Locally, you might be running a MySQL instance instead, and you need a DNS mechanism to discover that instance.
I did a Google search and looked at the kOps documentation. I could not find any information about deploying locally; it only supports public cloud providers.
IMO, you need to figure out a way to set up your local cluster, and for any usage of cloud-native technologies, you need to figure out an alternative way of providing the same capability locally.
The true answer, as Rajan Panneer Selvam said in his response, is that it depends, but I'd like to expand somewhat on his answer by saying that your application should run on any K8S cluster given that it provides the services that the application consumes. What you're doing is considered good practice to ensure that your application is portable, which is always a factor in non-trivial applications where simply upgrading a downstream service could be considered a change of environment/platform requiring portability (platform-independence).
To help you achieve this, you should be developing a 12-Factor Application (12-FA) or one of its more up-to-date derivatives (12-FA is getting a little dated now and many variations have been suggested, but mostly they're all good).
For example, if your application uses a database, then it should use DB-independent SQL or NoSQL so that you can switch it out. In production you may run on Oracle, but in your local environment you may use MySQL: your application should not care. The credentials and connection string should be passed to the application via the usual K8S techniques of Secrets and ConfigMaps to help you achieve this. And all logging should be sent to stdout (and stderr) so that you can use a log-shipping agent to send the logs somewhere more useful than a local filesystem.
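As a sketch of what that looks like in code (the environment variable names are arbitrary choices, populated in K8S from Secrets and ConfigMaps):

    import logging
    import os
    import sys

    # Connection details come from the environment, so the same image runs
    # against Oracle in production and MySQL locally without code changes.
    # These variable names are illustrative, not a standard.
    db_host = os.environ["DB_HOST"]
    db_user = os.environ["DB_USER"]
    db_password = os.environ["DB_PASSWORD"]

    # Log to stdout so a log-shipping agent can collect the output,
    # rather than writing to a local file.
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)
    logging.getLogger(__name__).info("connecting to %s as %s", db_host, db_user)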
If you run your app locally, then you have to provide a surrogate for every 'platform' service that is provided in production. This may mean switching out major components of what you consider to be your application, but this is OK; it is meant to happen. You provide a platform that provides services to your application layer. Switching from EC2 to local may mean reconfiguring the ingress controller to work without the ELB, or it may mean configuring Kubernetes secrets to use local storage for dev creds rather than AWS KMS. It may mean reconfiguring your persistent volume classes to use local storage rather than EBS. All of this is expected and right.
What you should not have to do is start editing microservices to work in the new environment. If you find yourself doing that then the application has made a factoring and layering error. Platform services should be provided to a set of microservices that use them, the microservices should not be aware of the implementation details of these services.
Of course, it is possible that you have some non-portable code in your system; for example, you may be using some Oracle-specific PL/SQL that can't be run elsewhere. This code should be extracted to config files and equivalents provided for each database you wish to run on. This isn't always possible, in which case you should abstract as much as possible into isolated services and reimplement only those services on each new platform, which could still be time-consuming, but is ultimately worth the effort for most non-trivial systems.

Migrating an on-premise web application to AWS EC2

Can someone please advise on the steps required for migrating a web application, currently running on an on-premise Tomcat server, to an AWS EC2 instance? I understand this is not straightforward and requires a somewhat detailed process.
The code is written in Java, and the database used is Oracle.
It would be helpful if someone could suggest any relevant documentation or website with a demo that I can refer to in order to proceed with this scenario.
If it's a personal project, then I would recommend Lightsail as the simplest way to deploy an existing Java application.
For the database, use a small MySQL instance, or, if a relational database is not needed, a document database like DynamoDB. https://aws.amazon.com/products/databases/?nc2=h_m1
There are multiple choices for how to migrate a Java application to AWS.
You could potentially use existing AWS services like:
Lightsail - https://aws.amazon.com/lightsail/
Elastic Beanstalk - https://aws.amazon.com/elasticbeanstalk/
or
an EC2 instance with Tomcat installed manually
ECS with Docker: https://aws.amazon.com/getting-started/tutorials/deploy-docker-containers/?nc2=type_a
As for the database, Oracle is an option, but quite an expensive one.
When moving to AWS, it's better to use one of the RDS managed databases like MySQL, PostgreSQL, or the more expensive Aurora.
In order to propose an architecture, some details would be needed on the predicted load, the size of the application, and the volume of data. Is the product regional or global? Are there any additional issues that need to be addressed while moving to the cloud (performance, availability, etc.)? How are users authenticated (are any other services needed)?

AWS Docker Container for Local Development

I'm using AWS DynamoDB, Lambda, Elasticsearch, and ElastiCache (Redis). I want to bring all of these services offline for local development. I'm wondering: is there a Docker container for all these services?
Perhaps! There's a (set of) Docker containers that claims to provide local implementations of popular AWS services: localstack.
Edit: For Lambda-specific things, there's also Docker Lambda!
I've never actually used these Docker containers, but I have wanted to. (My development needs try to use commodity services instead of vendor-specific ones. So MongoDB instead of DynamoDB; and sure, we might use ElastiCache to run our Redis cluster, but that just means in local development we can use Redis directly. Having said that, that's not everyone's cup of tea / may not be possible for some things.)
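If you do try localstack, the usual pattern is to point the AWS SDK at its local endpoint instead of the real one; here's a sketch using boto3, assuming LocalStack's conventional edge port 4566 and dummy credentials (details may differ by version):

    import boto3

    # Point the SDK at LocalStack rather than the real AWS endpoint, e.g.
    # after: docker run -d -p 4566:4566 localstack/localstack
    dynamodb = boto3.client(
        "dynamodb",
        endpoint_url="http://localhost:4566",
        region_name="us-east-1",
        aws_access_key_id="test",        # dummy credentials; LocalStack
        aws_secret_access_key="test",    # does not validate them
    )
    print(dynamodb.list_tables())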
We use Docker for most AWS services for local development, except for AWS Lambda.
We use the service containers as below:
MySQL for RDS MySQL
Redis for ElastiCache
ElasticSearch for AWS ElasticSearch
fake-s3 for S3
ActiveMQ for mocking SQS and SNS topics (The implementation for SNS topics is a bit ugly, but abstracted out in one place with some if-else statements)
Most of our services use docker-compose to start the dependent containers. We've included these containers on our build server too, to run our integration tests.
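As an example of how the tests then talk to these containers, here is a sketch of a boto3 client pointed at the fake-s3 container (the port and image name are assumptions based on fake-s3's defaults):

    import boto3

    # Assumes fake-s3 is running locally, e.g. via docker-compose or:
    #   docker run -d -p 4569:4569 lphoward/fake-s3
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:4569",
        region_name="us-east-1",
        aws_access_key_id="fake",       # fake-s3 ignores credentials
        aws_secret_access_key="fake",
    )
    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello")
    print(s3.list_objects_v2(Bucket="test-bucket")["Contents"][0]["Key"])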
In addition, most of the containers we are using needed some modifications to the original Dockerfile, so we had to push our changes to our own Docker repository, which we maintain using ECS.
For Lambda, we do not use a Docker container; we start our own HTTP server locally to test and invoke the Lambda function.
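A bare-bones sketch of such a local harness (the handler function and port here are hypothetical placeholders for your real Lambda entry point):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def handler(event, context):
        # Stand-in for your real Lambda handler.
        return {"statusCode": 200, "body": json.dumps({"echo": event})}

    class LocalLambda(BaseHTTPRequestHandler):
        def do_POST(self):
            # Treat the POST body as the Lambda event payload.
            length = int(self.headers["Content-Length"])
            event = json.loads(self.rfile.read(length))
            result = handler(event, context=None)
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(result).encode())

    if __name__ == "__main__":
        # Invoke with: curl -X POST localhost:9001 -d '{"name": "test"}'
        HTTPServer(("localhost", 9001), LocalLambda).serve_forever()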
Been using this setup for over a year without any issues. You may also want to refer to this blog from IFTTT to get some more ideas around DNS resolution and how to make this effort better.

Kubernetes, Eucalyptus, and AWS

I did not find any topic discussing Eucalyptus and Kubernetes, which I find quite odd, because Eucalyptus allows you to create a hybrid cloud with an AWS-compatible private cloud for S3, EBS, and EC2. So I was thinking that with just Eucalyptus and Kubernetes, you could easily create an awesome hybrid cloud.
Has anyone experimented with Kubernetes on Eucalyptus? If so, how did you set it up? And how do your private and public clouds work (together, or independently)?
Eucalyptus does not yet officially support the AWS ECS service, but there has been some work done in this area recently. For that, you will have to install Eucalyptus from source using a custom git branch.
The link below explains how to get an AWS ECS-like service on Eucalyptus:
http://jeevanullas.in/blog/aws-ec2-container-service-api-in-eucalyptus/
We installed Kubernetes on top of Eucalyptus using Rancher. We had to modify the AWS Docker Machine driver to make it work automatically, but the ability to use custom regions was added to Docker Machine a few months ago, so it should be in Rancher soon. You can also install Rancher hosts manually using the bootstrap script.
For the most part it works quite well, and the bonus of using Rancher is that you get the Swarm, Mesos, and Cattle orchestration engines, which broadens the types of container deployments possible. Upcoming versions of Rancher will support EBS volumes, and we plan to experiment with that for container storage.
We haven't tried connecting private and public, but in theory it should work as long as the UDP ports required for Rancher networking are open. I'm not even sure this is how container orchestration is designed to work across regions, though, so proceed with caution.

AWS Elastic Beanstalk - Using MongoDB instead of RDS in a Python and Django environment

I've been following the official Amazon documentation on deploying to Elastic Beanstalk:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python.html
and the documentation on customizing the environment:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html#customize-containers-format
However, I am stuck. I do not want to use the built-in RDS database; I want to use MongoDB, but have my Django/Python application scale as a RESTful frontend, or rather an API endpoint, for my users.
Currently, I am running one EC2 instance to test out my Django application.
Some problems that I have with Elastic Beanstalk:
1. I cannot figure out how to run commands such as:
pip install git+https://github.com/django-nonrel/django#nonrel-1.5
Since I cannot install the MongoDB driver for use by Django, I cannot run my MongoDB commands.
I was wondering if I am just skipping over some concepts or simply not understanding how deploying on Beanstalk works. I can see that Beanstalk just launches EC2 instances, and I possibly need to write custom scripts or something; I don't know.
I've searched around, but I don't exactly know what to ask in regard to this. The top Google results are always Amazon documents, which are less than helpful for customization outside of their RDS environment. I know that Django traditionally uses RDS environments, but again, I don't want to use those, as they are not flexible enough for the web application I am writing.
You can create a custom AMI tailored to your specific needs; the steps are outlined in the AWS documentation below. Basically, you would create a custom AMI with the packages needed to host your application, and then update the Beanstalk config to use your custom AMI.
Using Custom AMIs