I have been developing a distributed file system that runs over a TCP/UDP network.
I am writing the application in C++.
Currently I am looking for a test framework that I can use for basic testing of the DFS.
I am assuming I have to write some sort of plugin for the test framework.
I don't have much computing power (only two machines), so I would also like ideas on whether to use some sort of simulator or buy hardware for testing. Currently I am thinking about putting multiple VMs on my machines to create the test environment.
The test framework should be agnostic to the network protocol being used. I assume most are, but I'm not sure.
Any additional suggestions regarding the test environment/framework would be appreciated.
Using MIT's StarCluster toolkit you can launch a cluster on Amazon EC2. Amazon provides a free tier, which you can use for this.
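As for the framework itself, a general-purpose C++ framework such as Google Test is protocol-agnostic, because the tests only ever talk to the DFS through your own client interface. A minimal sketch, assuming a hypothetical DfsClient class that wraps your TCP/UDP code:

    // Minimal Google Test sketch for exercising a DFS through its client API.
    // DfsClient is a hypothetical wrapper around your own network code; the
    // framework never sees the wire protocol, so it stays protocol-agnostic.
    #include <gtest/gtest.h>
    #include <string>

    #include "dfs_client.h"  // hypothetical: your client library

    TEST(DfsBasics, WriteThenReadRoundTrip) {
        DfsClient client("10.0.0.1", 9000);  // a test node, e.g. a local VM
        ASSERT_TRUE(client.Connect());

        const std::string payload = "hello dfs";
        ASSERT_TRUE(client.Write("/test/file1", payload));

        std::string readback;
        ASSERT_TRUE(client.Read("/test/file1", &readback));
        EXPECT_EQ(payload, readback);
    }

    int main(int argc, char** argv) {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }

The same binary can then be pointed at local VMs or at EC2 instances just by changing the node addresses.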
Can we deploy QlikSense/QlikView on serverless architecture?
We are currently using a monolithic architecture; is there any other way to move to serverless?
While I am not familiar with Qlik's products, it is unlikely they would be suitable for a serverless architecture.
Companies generally offer their products either as:
Downloadable products that you run on your own server (which could be a virtual server in the cloud), or
Software-as-a-Service, where you access their website directly and no server is required (e.g. Salesforce)
"Serverless architecture" is a design decision that can be made when designing a software product. It means the the application is broken-down into small components ('microservices') that can be run on services like AWS Lambda, with no actual server.
However, such an architecture would normally only be used for applications that you create yourself. If another company has designed its system to be 'serverless', it would normally run it on a cloud platform (e.g. AWS) and offer it to users as Software-as-a-Service. It would be highly unusual to have a 'download' product that runs on a serverless architecture.
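To make the "no actual server" point concrete: a Lambda function is just a handler that the platform invokes on demand. AWS even provides a C++ runtime (the aws-lambda-cpp library); a minimal sketch, with an illustrative JSON payload:

    // Minimal AWS Lambda handler built with the aws-lambda-cpp runtime library.
    // There is no server to manage: the platform calls my_handler on demand.
    #include <aws/lambda-runtime/runtime.h>

    using namespace aws::lambda_runtime;

    static invocation_response my_handler(invocation_request const& request) {
        // request.payload holds the invocation's JSON input.
        return invocation_response::success("{\"ok\": true}", "application/json");
    }

    int main() {
        run_handler(my_handler);
        return 0;
    }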
I notice that Qlik has product offerings that run on AWS (AWS Marketplace: Qlik), but these run on Amazon EC2 instances rather than serverless.
If you look into the Qlik Core product, then yes, Qlik can be deployed in an elastic containerised environment. But as I understand it, you don't get the standard objects and visualisations, user management, etc., so you have to write your own code that ties into the Qlik Data Analytics Engine via its APIs.
From https://core.qlik.com/why-qlik-core/
So, how do you get it? Licensing info here but let’s talk components
Linux-based Associative Engine – provided as a Docker image with built-in support for Amazon Web Services, Microsoft Azure and Google Cloud Platform
Supporting APIs – these ingest your data into the Qlik Associative Engine through connectors
Supporting Open Source Libraries – these various libraries by Qlik expose the engine to help you build solutions faster
It’s all language agnostic but JavaScript lovers will find it easier to work with given the number of our open source tools available in JavaScript. Other top languages and tools used include R, Go, Shell, C#, Python, Java and D3. Qlik Core can also be managed with the orchestration tool of your choice for implementing, scaling and managing containerized applications.
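To make the "Supporting APIs" point concrete: the Qlik Associative Engine speaks JSON-RPC over WebSocket, so any language with a WebSocket client can drive it, not only JavaScript. A rough C++ sketch using Boost.Beast against a locally running engine container (the port, path, and method shown are illustrative; check the engine docs for your setup):

    // Connect to a Qlik Associative Engine container over WebSocket and send
    // one JSON-RPC request. Port 9076 and the GetDocList call are illustrative.
    #include <boost/asio/connect.hpp>
    #include <boost/asio/ip/tcp.hpp>
    #include <boost/beast/core.hpp>
    #include <boost/beast/websocket.hpp>
    #include <iostream>
    #include <string>

    namespace net = boost::asio;
    namespace beast = boost::beast;
    using tcp = net::ip::tcp;

    int main() {
        net::io_context ioc;
        tcp::resolver resolver{ioc};
        beast::websocket::stream<tcp::socket> ws{ioc};

        auto const results = resolver.resolve("localhost", "9076");
        net::connect(ws.next_layer(), results.begin(), results.end());
        ws.handshake("localhost", "/app/");

        // Ask the engine (handle -1 = Global) for its list of documents.
        const std::string request =
            R"({"jsonrpc":"2.0","id":1,"handle":-1,"method":"GetDocList","params":[]})";
        ws.write(net::buffer(request));

        beast::flat_buffer buffer;
        ws.read(buffer);
        std::cout << beast::make_printable(buffer.data()) << std::endl;

        ws.close(beast::websocket::close_code::normal);
    }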
It really depends on what exactly you are building. As The Budac mentioned, you can use Qlik Core.
If you just want to invoke the Qlik APIs (for example, for some automation jobs), then serverless functions make sense.
Qlik Sense (both the Enterprise and Kubernetes versions) exposes many more APIs, which can be called from practically anywhere.
QlikView, on the other hand, is more... conservative. QV is older software and its APIs/integrations are more limited. For example, to call the Management API you have to be on the same domain as QV. Personally, I've only connected to the QV Management API with C#, and I'm pretty sure you can't use JS/Node.
My company is currently evaluating Hyperledger Fabric and using it for our POC. It looks very promising, and we're aiming to roll it out to production in the next few months.
We're targeting AWS as our production environment.
However, we're struggling to find good tutorials/practices/recommendations for operating a Hyperledger network in such an environment.
I'm aware that Cello aims to ease deploying/monitoring a Hyperledger network, but I've also read that it's not production-ready yet. The question is: should we even consider looking at Cello at this point?
If not, what are our alternatives? Docker Swarm, Kubernetes?
I also couldn't find information about recommended instance types. I understand this is application- and AWS-specific, but what are the minimal system requirements (memory/CPU/network), for example for a 'peer' node? (Our application is not network-intensive, and only a few transactions will be submitted per day.)
Another question is where to create those instances on AWS from a geographical/decentralization point of view. Does it make sense to create all of them in the same region, or must we run instances in different regions?
Thanks a lot.
Igor.
Yes, look at Cello; if nothing else it will help you see the AWS deployment model.
There is really nothing special to it:
design the desired system (peers, orderer, gateways, etc.),
then decide how many EC2 instances you need to support it.
As for WHERE (which region): it depends on where the connecting application is and what kind of fault tolerance you need for your business model.
One of the businesses I am working with wants a minimum of 99.99999% availability, so multi-region is critical. A remote region is just another EC2 instance with sockets open from different hosts.
AWS doesn't provide much in terms of support for Hyperledger. They have some templates that let you set up the VMs initially, but that's something you can do yourself as well.
You are right, the documentation is very light and often confusing. I got to the point where I can start from scratch with a brand-new VM, get everything ready, deploy my own network definition and chaincode, and have scripts to do all of that.
IBM Cloud has much better support for Hyperledger, however. You can design your network visually, download your connection profiles, deploy and instantiate chaincode, create and join channels, and handle certificates: pretty much everything you need to run and support such a network. It's light years ahead of AWS. They even have a full CI/CD pipeline that you could replicate for your own project. If you look at their Marbles demo, you'll see what I mean.
Cello is definitely worth looking at, with the caveat that it is in incubation, meaning it's not production-ready and not really useful until it becomes a fully fledged product.
I have a web application with a Node.js server and an HTML client.
The server is integrated with many C++ algorithms. To reduce server load and get high performance, I want to distribute my algorithms in parallel from the server.
I'm a newbie to Hadoop and its Map/Reduce programming concept.
Questions:
Should I use clustering for this architecture?
Is this done with MapReduce?
You are mixing up:
Clustering, as in data analysis ("cluster analysis", but that is hard to pronounce)
Clustering, as in load balancing (easier to say and more precise, but not as cool as "clustering")
Make sure to distinguish these two.
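If what you want is the second kind, spreading algorithm invocations across workers, you may not need Hadoop at all: MapReduce pays off for large batch jobs over big data sets, not for offloading individual algorithm calls from a web server. A minimal single-machine sketch of fanning calls out across cores with std::async (run_algorithm is a stand-in for one of your algorithms):

    // Fan a batch of independent algorithm invocations out across CPU cores.
    // run_algorithm stands in for one of your C++ algorithms; across several
    // machines you would replace the direct call with an RPC to a worker.
    #include <future>
    #include <iostream>
    #include <vector>

    int run_algorithm(int input) {
        return input * input;  // placeholder for real work
    }

    int main() {
        std::vector<std::future<int>> jobs;
        for (int input = 0; input < 8; ++input) {
            jobs.push_back(std::async(std::launch::async, run_algorithm, input));
        }
        for (auto& job : jobs) {
            std::cout << job.get() << '\n';  // blocks until that result is ready
        }
    }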
I have created a Django app that uses C++ for some of the views as well as a Java library. How would I deploy this app? What kind of hosting service allows multiple languages? I have looked at EC2, GAE, and several platforms (like Heroku), but I can't seem to find a definitive solution.
I have never deployed anything to the web, so a simple explanation would be much appreciated.
PaaS offerings are probably not your best bet. If you want the scalability (and the associated availability buzzwords) that come with hosting your application on a huge web company's platform, look at IaaS (Infrastructure as a Service) systems like Google Compute Engine or AWS EC2. With these you get a virtual server (or several), running the Linux distro of your choice, and you can install and run whatever you please without being constrained to a specific platform like App Engine or Heroku (where you basically have to write your app specifically for that platform). If you plan on consuming a lot of bandwidth/resources from the get-go, you will almost certainly get a better deal from a dedicated server at a smaller company.
I'm curious what specifically you are executing C++ for in a Django view. Image/video processing?
Well. Deployment is not really something where a simple explanation helps much.
First I would check what the operating-system requirements are (compilers, dependencies, …). That may narrow the options quickly.
I suspect that with a setup containing C++ and Java artifacts, the usual PaaS offerings (GAE, Heroku, …) will not be sufficient, because they define the stack, and a mixture of Python/C++/Java is rather uncommon, I'd say.
Choosing an IaaS offering (EC2, …) may be an option. There you can run your whole self-defined stack and have the option of easier scaling.
Hosting the application on your own server(s) is also always possible. Check your data-protection regulations to find out whether it is in fact a requirement.
There are a lot of ways to get a Django application to run. The Django documentation has some information about deployment. If you have special requirements, uWSGI may be a good application server.
You may also want a web server in front of the application. Possibilities range from using uWSGI's built-in HTTP server to putting e.g. Nginx in front of uWSGI.
All in all, every component of the whole "deployment" has hundreds of bells and whistles, and it's not easy to give advice without knowing the specific requirements and properties of the system. You'll probably also need to deploy a database.
But before deploying to the web, it's important to have a solid build process to assemble all the parts, and not only on the development machine. With three languages involved, this should be the first step to solve. If the app builds and deploys automatically in a development environment, moving it to a server is much easier.
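For the C++ parts specifically, one pattern that keeps both the build and the deployment manageable is to compile the C++ code into a shared library with a C interface and call it from the Django view via ctypes. A minimal sketch of the C++ side (process_data and the library name are hypothetical):

    // mylib.cpp -- hypothetical C++ code used by a Django view, built as a
    // shared library:  g++ -shared -fPIC -o libmylib.so mylib.cpp
    // The extern "C" wrapper gives the function an unmangled name that
    // Python's ctypes (ctypes.CDLL("./libmylib.so").process_data) can find.
    #include <cstdint>

    namespace {
    int64_t heavy_computation(int64_t value) {
        return value * value;  // placeholder for the real algorithm
    }
    }  // namespace

    extern "C" int64_t process_data(int64_t value) {
        return heavy_computation(value);
    }

The view then loads the library once and calls process_data like an ordinary Python function; the Java library can be isolated the same way behind a subprocess or a small service.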
I am currently writing a client-server app for the iOS platform. The client is written in Obj-C, and the server uses C++ on OS X 10.9. Since I intend to run the server software on an Ubuntu dedicated server, I am trying my best to keep the server-side code portable.
To store data about users and user-game relations I intend to use an SQL database (most likely MySQL, or possibly PostgreSQL, since I'm familiar with those). I know that it is possible to read from/write to the database through a file descriptor just as I do in my TCP module, but I wish to use a higher-level SQL communications API to make the programming process quicker.
Can anyone recommend a good open-source/free SQL API for C++ on *NIX? Any help would be appreciated. Thanks in advance!
You have several options here:
Use the native database SDK (e.g. libmysqlclient for MySQL, libpq for PostgreSQL). These are usually distributed along with the database installation or as separate downloads/packages. The upside is that you get maximum speed; the downside is that you are locked into your initial choice: no switching afterwards without rewriting part of the application.
Use a C++ ORM (example: ODB). This gives you DB independence along with some tasty features, at the cost of slightly reduced speed.
unixODBC supports both MySQL and PostgreSQL. Take a look at it.
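To give a feel for the unixODBC route: the database is described by a DSN in odbc.ini, and the same C++ code then works unchanged against MySQL or PostgreSQL. A minimal sketch (the DSN name, credentials, and table are illustrative):

    // Minimal unixODBC sketch: connect via a DSN and run one query.
    // "MyDSN", the credentials, and the users table are illustrative; the DSN
    // (pointing at MySQL or PostgreSQL) is configured in odbc.ini.
    // Build with: g++ query.cpp -lodbc
    #include <sql.h>
    #include <sqlext.h>
    #include <iostream>

    int main() {
        SQLHENV env;
        SQLHDBC dbc;
        SQLHSTMT stmt;

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

        if (!SQL_SUCCEEDED(SQLConnect(dbc, (SQLCHAR*)"MyDSN", SQL_NTS,
                                      (SQLCHAR*)"user", SQL_NTS,
                                      (SQLCHAR*)"password", SQL_NTS))) {
            std::cerr << "connection failed\n";
            return 1;
        }

        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        SQLExecDirect(stmt, (SQLCHAR*)"SELECT name FROM users", SQL_NTS);

        SQLCHAR name[256];
        while (SQL_SUCCEEDED(SQLFetch(stmt))) {
            SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), nullptr);
            std::cout << name << '\n';
        }

        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
    }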