C++ runtime API

I want to create an application that, when executed, has runtime functions that are accessible by other applications.
For example, a C++ application that stores values in files and retrieves that information. While this application is running, any other C++ application could access its save and retrieve functionality to save and retrieve data, but would have no other connection to this system.

Sounds like a simple job for web services, or a remote database, or even an LDAP server.
Store and retrieve are operations common to all of these.
If the goal is to learn some specific technology, then ask a more specific question. Otherwise, don't reinvent any wheels. There are plenty of things out there for store and retrieve.
One of the simplest "store and retrieve" APIs I know of is Berkeley DB (also known as Sleepycat).
We built a giant, clustered, simple key-based database for a major telecom company using LDAP on top of Berkeley DB. All open-source software and commodity hardware, and it supports mission-critical operations for millions of customers.
A more modern rendition of this might use memcached as well.
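Sticking with Berkeley DB, here is a minimal store-and-retrieve sketch using its classic C API (callable from C++). The file name and the key/value strings are placeholders, and error handling is pared down.

    // Minimal Berkeley DB put/get sketch (classic C API, usable from C++).
    // "store.db" and the key/value strings are placeholders.
    #include <db.h>
    #include <cstring>
    #include <cstdio>

    int main() {
        DB *dbp = nullptr;
        if (db_create(&dbp, nullptr, 0) != 0) return 1;
        if (dbp->open(dbp, nullptr, "store.db", nullptr, DB_BTREE, DB_CREATE, 0664) != 0) return 1;

        DBT key, val;
        std::memset(&key, 0, sizeof key);
        std::memset(&val, 0, sizeof val);
        key.data = (void *)"user:42";      key.size = sizeof "user:42";
        val.data = (void *)"hello world";  val.size = sizeof "hello world";

        dbp->put(dbp, nullptr, &key, &val, 0);           // store

        DBT out;
        std::memset(&out, 0, sizeof out);
        if (dbp->get(dbp, nullptr, &key, &out, 0) == 0)  // retrieve
            std::printf("%.*s\n", (int)out.size, (char *)out.data);

        dbp->close(dbp, 0);
        return 0;
    }

That put/get pair is essentially the whole "save and retrieve" API the original question asks for; exposing it to other processes behind a small local service is a separate step.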
If you go HTTP based, you can use something as simple as libcurl against an Apache web server to implement "RESTful" services with GET and PUT requests.
If you run it locally (same server) and access it via localhost (127.0.0.1), there is very little latency in the TCP stack, and it amounts to little more than memcpys at the kernel level.
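As a rough illustration of the libcurl side, here is a minimal GET against a local service; the URL, the endpoint path, and the idea that the service returns the stored value in the response body are all assumptions.

    // Hedged sketch: a "retrieve" call via HTTP GET against a local service,
    // using libcurl's easy interface. The endpoint is hypothetical.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    static size_t collect(char *ptr, size_t size, size_t nmemb, void *userdata) {
        static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        std::string body;
        curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1/store/some-key"); // hypothetical endpoint
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

        if (curl_easy_perform(curl) == CURLE_OK)
            std::cout << body << '\n';   // the stored value, per the assumed API

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }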

Simple message passing would also do: say, JSON over ØMQ, or an RPC layer such as msgpack-rpc, protobuf-remote, or Cap'n Proto RPC.
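If you go the ØMQ route, a minimal sketch of the receiving end could look like the following, using the stable libzmq C API from C++. The port, the JSON payload and the reply format are assumptions, and the fixed-size buffer truncates oversized messages.

    // Hedged sketch: a tiny "save/retrieve" service endpoint over ØMQ (REQ/REP).
    #include <zmq.h>
    #include <cstring>
    #include <cstdio>

    int main() {
        void *ctx = zmq_ctx_new();
        void *rep = zmq_socket(ctx, ZMQ_REP);
        zmq_bind(rep, "tcp://127.0.0.1:5555");              // hypothetical endpoint

        char buf[256];
        while (true) {
            int n = zmq_recv(rep, buf, sizeof buf - 1, 0);  // request, e.g. a JSON string
            if (n < 0) break;
            buf[n < (int)sizeof buf - 1 ? n : (int)sizeof buf - 1] = '\0';
            std::printf("request: %s\n", buf);

            const char *reply = "{\"ok\":true}";            // placeholder reply
            zmq_send(rep, reply, std::strlen(reply), 0);
        }

        zmq_close(rep);
        zmq_ctx_term(ctx);
        return 0;
    }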

Related

How do we dump data into Informatica?

I have to dump data from various sources into Informatica. Some sources are manual files that would be dropped via an SFTP server, some come via APIs, and some via a direct DB connection. In that case, how do we connect to the files on the server? Via some kind of connection to the SFTP server, an API endpoint connection, or a DB connection via a DB endpoint? In these cases, how do we authenticate? I don't want to use username/password; is there a way to use an Active Directory connection?
How does Informatica authenticate that the source of the files is genuine?
If you mean the source itself, then you need to decide whether the source is genuine before you create a connection to it.
If you mean how to secure the connection, then that is a property of the source and is defined by the owner of the source. Informatica can use almost any industry-standard secure protocol and authentication method.
Any way to scan for malicious files?
Informatica can implement any business rules you want to define to determine whether the data in a file is malicious.
If you are asking whether there is a "magic button" you can press that will tell you if a file is malicious, then the answer is no.
Answer to Question about PocketETL
Once you've identified all the functionality required to implement your overall architecture, you have 2 basic options for how you satisfy these requirements:
Identify a single tool that covers as much of the functionality as possible and then fill in the gaps with other tools
simplest to implement
should "just work"
unlikely to be "best of breed" in all areas
unlikely to be the cheapest solution
Implement point solutions for each area of functionality
likely to be a better solution, for you, in each area
may be cheaper
but you have to get all the components working together, which is unlikely to be trivial
you need to know how to implement and configure multiple products, not just one
So you could use Informatica to do everything, or you could use PocketETL to do the first piece of data movement and then other tools to implement the rest of the data pipeline.

Uploading large files to server

The project I'm working on logs data on distributed devices that needs to be joined in a single database on a remote server.
The logs cannot be streamed as they are recorded (the network may not be available, etc.), so they must be sent occasionally as bulky 0.5-1 GB text-based CSV files.
As far as I understand, this means that having a web service receive the data in the form of POST requests is out of the question because of the file sizes.
So far I've come up with this approach: use some file transfer protocol (FTP or similar) to upload files from the device to the server. Devices would have to figure out a unique filename to do this with. Have the server periodically check for new files, process them by committing them to the database, and delete them afterwards.
It seems like a very naive way to go about it, but simple to implement.
However, I want to avoid any pitfalls before I implement any specifics. Is this approach scalable (more devices, larger files)? Implementation will be done either on a private/company-owned server or on a cloud service (Azure, for instance) - will it work on different platforms?
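For what it's worth, a minimal sketch of the server-side "check for new files" loop described above might look like this (C++17 <filesystem>); the directory, the polling interval and the database-commit stub are placeholders.

    // Hedged sketch of the polling loop from the question: scan an incoming
    // directory, "process" each CSV (the database commit is left as a stub),
    // then delete it. Paths and interval are placeholders; requires C++17.
    #include <filesystem>
    #include <chrono>
    #include <thread>
    #include <iostream>

    namespace fs = std::filesystem;

    bool commit_to_database(const fs::path &csv) {
        std::cout << "importing " << csv << '\n';   // stub: bulk-load the CSV here
        return true;
    }

    int main() {
        const fs::path incoming = "/var/data/incoming";   // hypothetical upload directory
        while (true) {
            for (const auto &entry : fs::directory_iterator(incoming)) {
                if (entry.path().extension() == ".csv" && commit_to_database(entry.path()))
                    fs::remove(entry.path());              // delete only after a successful commit
            }
            std::this_thread::sleep_for(std::chrono::minutes(5));
        }
    }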
You could actually do this over web/HTTP as well, after raising the POST size limits in the web server (post_max_size and upload_max_filesize for PHP). This will allow devices to interact regardless of platform. Shouldn't be too hard to make a POST request from any device. A simple cURL request could get this job done.
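As a rough sketch of that, the following streams a large CSV to a hypothetical endpoint with libcurl; with CURLOPT_UPLOAD set, HTTP URLs are sent as PUT, and switching the URL scheme to ftp:// uploads over FTP instead. The URL and file name are assumptions.

    // Hedged sketch: streaming a large CSV to the server with libcurl (POSIX stat()).
    #include <curl/curl.h>
    #include <cstdio>
    #include <sys/stat.h>

    int main() {
        const char *path = "log_20240101.csv";            // placeholder file
        FILE *fp = std::fopen(path, "rb");
        if (!fp) return 1;

        struct stat st;
        stat(path, &st);

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://example.com/upload/device42_log_20240101.csv"); // hypothetical
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);           // stream the body from READDATA
        curl_easy_setopt(curl, CURLOPT_READDATA, fp);         // default read callback uses fread()
        curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)st.st_size);

        CURLcode rc = curl_easy_perform(curl);

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        std::fclose(fp);
        return rc == CURLE_OK ? 0 : 1;
    }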
FTP is also possible. Or SCP, to make it safer.
Either way, I think this does need some application on the server to be able to fetch and manage these files using a database. Perhaps a small web application? ;)
As for the unique name, you could use a combination of the device's unique ID/name along with the current Unix time. You could even hash this (MD5/SHA-1) afterwards if you like.
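A tiny sketch of that naming scheme, with a placeholder device ID:

    // Hedged sketch: unique upload name from a device ID plus current Unix time.
    #include <chrono>
    #include <string>
    #include <iostream>

    int main() {
        const std::string device_id = "device42";          // placeholder unique ID
        const auto now = std::chrono::system_clock::now();
        const auto unix_time =
            std::chrono::duration_cast<std::chrono::seconds>(now.time_since_epoch()).count();

        const std::string filename = device_id + "_" + std::to_string(unix_time) + ".csv";
        std::cout << filename << '\n';                      // e.g. device42_1700000000.csv
        return 0;
    }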

SQL API for *NIX C++

I am currently writing a client-server app for the iOS platform. The client is written in Obj-C, and the server uses C++ on OS X 10.9. Since I intend to run the server software on an Ubuntu dedicated server, I am trying my best to keep the server-side code portable.
To store data about users and user-game relations I intend to use an SQL database (most likely MySQL, or possibly PostgreSQL, since I'm familiar with those). I know that it is possible to read from/write to the database through a file descriptor just like I do in my TCP module, but I wish to use a higher-level SQL communications API to make the programming process quicker.
Can anyone recommend me a good open source/free SQL API for *NIX C++? Any help would be appreciated. Thanks in advance!
You have several options here:
Use the native database SDK. These are usually distributed along with the database installation or as separate downloads/packages. The upside is that you can get maximum speed out of it. The downside is that you'll be tied to your initial choice - no switching afterwards without rewriting part of the application.
Use a C++ ORM (example: ODB). This gives you DB independence along with some tasty features, at the cost of slightly reduced speed.
unixODBC supports both MySQL and PostgreSQL. Take a look at it.
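A minimal unixODBC sketch, assuming a DSN named "gamedb" has been configured in odbc.ini; the credentials, table and column are placeholders and error handling is trimmed.

    // Hedged sketch: querying MySQL/PostgreSQL through unixODBC.
    #include <sql.h>
    #include <sqlext.h>
    #include <cstdio>

    int main() {
        SQLHENV env; SQLHDBC dbc; SQLHSTMT stmt;

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

        // "gamedb" is a DSN configured in odbc.ini (hypothetical), as are the credentials.
        SQLConnect(dbc, (SQLCHAR *)"gamedb", SQL_NTS,
                   (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"secret", SQL_NTS);

        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM users", SQL_NTS);

        SQLCHAR name[64];
        SQLLEN ind;
        while (SQLFetch(stmt) == SQL_SUCCESS) {
            SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof name, &ind);
            std::printf("%s\n", name);
        }

        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }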

How to 'web enable' a legacy C++ application

I am working on a system that splits users by organization. Each user belongs to an organization. Each organization stores its data in its own database which resides on a database server machine. A db server may manage databases for 1 or more organizations.
The existing (legacy) system assumes there is only one organization, however I want to 'scale' the application by running an 'instance' of it (tied to one organization), and run several instances on the server machine (i.e. run multiple instances of the 'single organization' application - one instance for each organization).
I will provide a RESTful API for each instance that is running on the server, so that a thin client can be used to access the services provided by the instance running on the server machine.
Here is a simple schematic that demonstrates the relationships:
Server 1 -> N databases (each organization has one database)
Organization 1 -> N users
My question relates to how to 'direct' RESTful requests from a client, to the appropriate instance that is handling requests from users for that organization.
More specifically, when I receive a RESTful request it will be from a user (who belongs to an organization); how (or indeed, what is the best way) do I 'route' the request to the appropriate application instance running on the server?
From what I can gather, this is essentially a sharding problem. Regardless of how you split the instances at a hardware level (using VMs, multiple servers, all on one powerful server, etc), you need a central registry and brokering layer in your overall architecture that maps given users to the correct destination instance per request.
There are many ways to implement this, of course, so choose one that you know, that is fast, and that will scale, since all requests will come through it. I would suggest a lightweight, stateless web application backed by a simple read-only database that does the appropriate client identifier -> instance mapping, which you would load into memory/cache. To add flexibility in hardware and instance location, use (assuming Java) JNDI to store the hardware/port/etc. information for each instance, and in your identifier mapping map the client identifier to the appropriate JNDI lookup key.
Letting the public API specify only the user sounds a little fragile to me. I would change the public API so that requests specify the organization as well as the user, and then have something trivial server-side that maps organizations to instances (e.g. organization foo -> the instance listening on port 7331).
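A sketch of that trivial server-side mapping; the organization names and ports are made up, and in practice the table would be loaded from configuration or a small database rather than hard-coded.

    // Hedged sketch: resolve an organization name from the request to the
    // port of the instance that should handle it (C++17 for std::optional).
    #include <string>
    #include <unordered_map>
    #include <optional>
    #include <iostream>

    std::optional<int> instance_port(const std::string &organization) {
        static const std::unordered_map<std::string, int> routes = {
            {"foo", 7331},   // hypothetical organizations and ports
            {"bar", 7332},
        };
        auto it = routes.find(organization);
        if (it == routes.end()) return std::nullopt;
        return it->second;
    }

    int main() {
        if (auto port = instance_port("foo"))
            std::cout << "route to 127.0.0.1:" << *port << '\n';
        return 0;
    }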
That is a very tough question indeed; simply because there are many possible answers, and which one is the best can only be determined by you and your environment.
I would write an Apache module in C++ to do that. Using this book, I managed to start writing very efficient modules.
To be able to give you more solutions (maybe just setting up a Squid proxy?), you'll need to specify how you will determine which server the client should be redirected to: by IP, through a GET parameter, through a POST XML parameter (like SOAP), etc.
As the other answer says, there are many ways to approach this issue. Let's assume that you DON'T have access to the legacy software's source code, which means you cannot modify it to listen on different ports for different instances.
Writing an Apache module seems VERY extreme for solving this issue (and as someone who actually just finished writing a production Apache module, I suggest avoiding it unless you are making serious money).
The approach can be as esoteric as you like. For instance, if your legacy software runs on normal Intel architecture and you have the hardware capacity, there are VM solutions where you should be able to create thin virtual machines, each running a single instance of the software, with a multiplexer to tie them all together.
If, on the other hand, you are running something like HP-UX, well :-) there are other approaches. How about you give a bit more detail?
Ahmed.

Best approach(es) or technolog(y/ies) for this specific problem?

I have a web-based interface for handling invoices, customer records and other transaction records, which currently interacts with a database of all the aforementioned stored on the same machine. As you can imagine, this is quite a simple set-up consisting of a web-app (PHP) and a database (MySQL). However, the ideal scenario is to keep the records on the machine they are currently on (easy) and move the web-app to another server within the same network (again, easy) ... but, in addition, to provide facilities on a public-facing website for managing accounts by customers and so forth. The problem is this - the public-facing web server is located in a completely separate location, as it is a dedicated server provided by a well-known ISP.
What would be the best way to make the records accessible from this other server while ensuring that all communications are secure? Speed is not a huge factor, although any outages on either side should be handled gracefully. Initially my thoughts went towards web services (XML-RPC/SOAP/Hessian), but these options seem to present difficulties (security being the main one, overcomplexity as well).
The web-app must remain PHP-based. The public-facing site is likely to be PHP-based as well, although Python (likely using Django) is another option. The introduction of any other technologies (Java etc) is not a problem, although it is preferred if they be Linux-friendly (so .NET would not be the best fit here).
Apologies if this question is somewhat verbose and vague. I am testing the water somewhat in regards to this kind of problem. Any advice or suggestions gratefully received.
I've done something similar. You can expose a web service to the internet that will do the database access, but requests to the service must match a strong hashed and salted password (which will be secured on the ISP's server in the DMZ).
Either this or some sort of public/private key encryption scheme.
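As a rough illustration of the shared-secret idea (not necessarily the exact scheme the answer used), the caller could send hex(SHA-256(salt + secret)) with each request and the service could compare it to a stored digest. Shown here in C++ with OpenSSL purely for illustration; the web app in question is PHP, and the salt/secret values are placeholders.

    // Hedged sketch: salted SHA-256 digest for authenticating service requests.
    // Uses OpenSSL's one-shot SHA256() helper (deprecated in 3.0 but still available).
    #include <openssl/sha.h>
    #include <string>
    #include <cstdio>

    std::string salted_sha256(const std::string &salt, const std::string &secret) {
        const std::string input = salt + secret;
        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256(reinterpret_cast<const unsigned char *>(input.data()), input.size(), digest);

        char hex[2 * SHA256_DIGEST_LENGTH + 1];
        for (int i = 0; i < SHA256_DIGEST_LENGTH; ++i)
            std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);
        return std::string(hex, 2 * SHA256_DIGEST_LENGTH);
    }

    int main() {
        // Both values are placeholders; the real secret lives only on the two servers.
        std::printf("%s\n", salted_sha256("per-deployment-salt", "shared-secret").c_str());
        return 0;
    }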
OK, this might seem a bit silly, but what if you just used MySQL replication?
Instead of using all sorts of fancy web services, just have a master SQL server on one machine, then have it replicate to another server that holds the slave SQL server as well as the web app.