Socket programming vs. web service? [closed] - web-services

I want to create a mobile messaging service, but I don't know which is better to use: socket programming or a web service.
What concerns do I need to take into consideration when creating such a service, such as cost of connection, etc.?
If you need more details, just tell me before voting the question down or voting to close it!

Web services are, generally speaking, "easier" to do, thanks to the tremendous interest in them and the support for them in developer tools and through libraries and frameworks.
However, especially if your payload is small (think messages the size of a typical SMS or tweet), the overhead you create with web services is prohibitive: bytes sent over a wireless network like GPRS or UMTS are still very expensive compared to bytes carried over cable or ADSL, and web services carry several layers of invisible info around that the end customer will also have to pay for.
So, if your use case is based on short messages, I'd at least advise doing some bandwidth simulation calculations, and basing your decision on bandwidth savings vs. the increased complexity of your app.
While looking at sockets, also have a look at UDP: if you can live with the fact that you basically throw someone a packet, and that without designing some ack mechanism into your protocol you'll never be sure the message arrived, it's very efficient, because there is no traffic to create and maintain a connection, and even pretty long messages fit very well inside one UDP packet.
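To give a feel for how little machinery that takes, here is a minimal sketch of a fire-and-forget UDP sender in C++ using POSIX sockets (the address and port are made-up examples, and as said, nothing confirms delivery unless you design an ack into your protocol):

    // Minimal UDP sender sketch; destination address/port are examples only.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);        // datagram socket: no connection setup
        if (sock < 0) { perror("socket"); return 1; }

        sockaddr_in dest{};
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5000);                      // example port
        inet_pton(AF_INET, "192.0.2.10", &dest.sin_addr); // example address (TEST-NET-1)

        const char msg[] = "short message";               // the whole message fits in one datagram
        if (sendto(sock, msg, sizeof msg - 1, 0,
                   reinterpret_cast<sockaddr*>(&dest), sizeof dest) < 0)
            perror("sendto");                             // fire and forget: no ack unless you add one

        close(sock);
        return 0;
    }

One sendto() call is one datagram on the wire; compare that in Wireshark with the packets generated by an equivalent web service call.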
EDIT based on comment:
stream socket: not sure how you define streams, but streams and messages are two very distinct concepts for me. A stream is a typically longer sequence of data being sent, whereas a message is an entity that's sent, and optionally acknowledged or answered by the receiver.
bandwidth simulation: the easiest way to understand what I'm talking about is to get Wireshark and add up everything that gets transported across the net to do a simple request of a very short string. You'll see several layers of administrative info (i.e. invisible, just there to make the different protocol layers work), all of which is traffic paid for by the end user. Then, write a small mock service using UDP to transport the same message, or use a tool like netcat (good tutorial here), and add up the bytes that get transported. You'll see pretty huge differences in the number of bytes that are exchanged.
EDIT2, something I forgot to mention: mobile networks used to be open, transparent networks with devices identified by public IP addresses. There's a rapid evolution towards NATed mobile networks, which has an impact on how devices inside and outside this "walled garden" can communicate (NAT traversal). You'll need to take this into account when designing your communication channel.
As for the use of streams for a chat application: it offers some conceptual advantages, but you can very well layer a chat app on top of UDP; look here or here.

Related

Is there a "serverless" solution for a (gaming) server? [closed]

I am working on a game at the moment and have started to research alternative solutions to dedicated servers.
The reason I am doing this is that I have little administration knowledge and would prefer to spend the time programming instead of learning that, especially when it comes to securing the dedicated machine itself rather than my application that runs on it.
So my question is whether there is a kind of server where you can just run your code (written e.g. in golang) with less or easier administration and the same or better security.
It would be perfect if I just got an endpoint connection to my application when a client wants to communicate, without having to care about security concerns outside of my application.
I have looked at some services from AWS and Google (not tested for now), but the whole range is confusing to me.
Information about the type of game:
realtime multiplayer
for now I use TCP with FlatBuffers to communicate (TCP should also be fine here)
the server is written in golang; for the client I currently use libGDX (Java) for testing, but I would probably change to something else once other questions are solved
by "server" I mean that the logic runs server-side and the communication between the client and the database goes through the server
If you really want to avoid maintaining a server for your software, consider dockerizing your server software, and run it on AWS Fargate. It will not necessarily be cheaper than a dedicated server, but you will not have to maintain any infrastructure or OS.
Short answer: no.
Serverless architectures don't work well with TCP, for various reasons. The first is that FaaS functions are made to be short-lived: holding a TCP socket open would keep the function alive for a long time, driving its cost up.
Second, BaaS usually adds a delay of some milliseconds, and also doesn't support TCP connections.
But with BaaS you can develop a turn-based multiplayer game, because the delay isn't that critical in that type of application.

C++ application for Linux to convert IPv4 packets to IPv6 [closed]

I'm looking to develop a C/C++ application for Linux that converts received IPv4 packets to IPv6 and vice versa (losing some IPv6-only features).
Step 1: how do I get all the necessary info from an incoming packet? Should I use a raw packet library to read all the TCP/UDP flags and info about the packet?
Any documentation about that? (I'm already looking at the beej.us guide.)
Step 2: I want to use this program on a Linux machine (e.g. Ubuntu) acting as a router, to forward all packets received from an IPv6 machine on one network card to an IPv4 machine connected to the IPv4 card on the router.
How do I receive and parse all packets in this application (except the packets directed to the router machine's own IP)? Is it possible at the application level, or do I have to touch the kernel? If so, where can I get some documentation about this?
Goal: have HTTP or another common protocol work between the two machines connected via the router.
Greatly appreciate any hints
Since converting between IPv4 and IPv6 necessarily implies changing the IP addresses in the packet, NAT is required by definition. Your project comes down to implementing a NAT router.
Read up on NAT64 to find out more about the particular flavour of NAT you are looking for.
For implementing a router in userspace, I think tun devices are probably the best design choice for sending and receiving packets. This is in fact the approach chosen by TAYGA (the first NAT64 implementation listed on the above-cited Wikipedia page).
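To give an idea of what the tun approach looks like, here is a minimal Linux-only sketch in C++; the interface name "nat64" is made up, and creating the device needs root or CAP_NET_ADMIN:

    // Sketch: open a tun device and read whole IP packets in userspace.
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>
    #include <cstring>
    #include <cstdio>

    int open_tun(const char* name) {
        int fd = open("/dev/net/tun", O_RDWR);
        if (fd < 0) { perror("open /dev/net/tun"); return -1; }

        ifreq ifr{};
        ifr.ifr_flags = IFF_TUN | IFF_NO_PI;   // raw IP packets, no extra header
        strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
        if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
            perror("ioctl TUNSETIFF");
            close(fd);
            return -1;
        }
        return fd;
    }

    int main() {
        int fd = open_tun("nat64");            // hypothetical interface name
        if (fd < 0) return 1;

        unsigned char pkt[65536];
        // Each read() returns one complete IP packet; the version nibble
        // tells you whether you are translating 4-to-6 or 6-to-4.
        ssize_t n = read(fd, pkt, sizeof pkt);
        if (n > 0)
            printf("got %zd-byte IPv%d packet\n", n, pkt[0] >> 4);
        close(fd);
        return 0;
    }

Once the interface is up and routes point at it, the kernel hands your process every packet routed to that device, which also answers the "except packets directed to the router's own IP" part: only traffic routed into the tun device reaches your program.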
Implementing a router (of any type, let alone a NAT) in userspace is a fairly complex and ambitious project, so the best two pieces of advice I can give are:
Do not implement this yourself. Instead, contribute your efforts to improving one of the existing open source implementations.
Failing that, study one of the existing open source implementations for inspiration.

C++ web crawler [closed]

I am experimenting and attempting to make a minimal web crawler. I understand the whole process at a very high level, so getting into the next layer of detail: how does a program 'connect' to different websites to extract the HTML?
Do I use sockets to connect to servers and send HTTP requests? Do I issue commands to a terminal that runs telnet or ssh?
Also, is C++ a good language of choice for a web crawler?
Thanks!
Also, is C++ a good language of choice for a web crawler?
Depends. How good are you at C++?
C++ is a good language to write an advanced high-speed crawler in, because of its speed (and you need that to process the HTML pages). But it is not the easiest language to write a crawler in, so it's probably not a good choice if you are experimenting.
Based on your question you don't have the experience to write an advanced crawler, so you are probably looking to build a simple serial crawler. For that, speed is not a priority, since the bottleneck is downloading the page across the web (not processing it). So I would pick another language (maybe Python).
Short answer: no. I prefer coding in C++, but this instance calls for a Java application. The API has many HTML parsers plus built-in socket protocols. This project would be a pain in C++. I coded one once in Java, and it was somewhat blissful.
BTW, there are many web crawlers out there but I assume you have custom needs :-)
If you plan to stick with C++, then you should consider using the libcurl library, instead of implementing the HTTP protocol from scratch using sockets. There are C++ bindings available for that library.
From curl's webpage:
libcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more!
libcurl is highly portable, it builds and works identically on numerous platforms, including Solaris, NetBSD, FreeBSD, OpenBSD, Darwin, HPUX, IRIX, AIX, Tru64, Linux, UnixWare, HURD, Windows, Amiga, OS/2, BeOs, Mac OS X, Ultrix, QNX, OpenVMS, RISC OS, Novell NetWare, DOS and more...
libcurl is free, thread-safe, IPv6 compatible, feature rich, well supported, fast, thoroughly documented and is already used by many known, big and successful companies and numerous applications.
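As a rough idea of what using it looks like, here is a minimal fetch sketch with libcurl's easy interface (a C API, callable from C++; link with -lcurl; the URL is just an example):

    // Sketch: download one page into a std::string with libcurl.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    // libcurl calls this for each chunk of the response body.
    static size_t on_body(char* data, size_t size, size_t nmemb, void* userp) {
        static_cast<std::string*>(userp)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        std::string html;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);    // follow redirects
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &html);

        CURLcode res = curl_easy_perform(curl);
        if (res == CURLE_OK)
            std::cout << "fetched " << html.size() << " bytes\n";
        else
            std::cerr << "curl error: " << curl_easy_strerror(res) << "\n";

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }

From there, a crawler is essentially a loop: fetch a page, extract links from the HTML, queue them, repeat.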

How are Massively Multiplayer Online RPGs built?

How are Massively Multiplayer Online RPG games built?
What server infrastructure are they built on, especially with so many clients connected and communicating in real time?
Do they manage with scripts that execute on page requests, or with installed services that run in the background and manage communication with connected clients?
Do they use other protocols? Because HTTP does not allow servers to push data to clients.
How do the "engines" work, to centrally process hundreds of conflicting gameplay events?
Thanks for your time.
Many roads lead to Rome, and many architectures lead to MMORPGs.
Here are some general thoughts on your bullet points:
The server infrastructure needs to support the ability to scale out... add additional servers as load increases. This is well-suited to Cloud Computing by the way. I'm currently running a large financial services app that needs to scale up and down depending on time of day and time of year. We use Amazon AWS to almost instantly add and remove virtual servers.
MMORPGs that I'm familiar with probably don't use web services for communication (since those are stateless), but rather a custom server-side program (e.g. a service that listens for TCP and/or UDP messages).
They probably use a custom TCP and/or UDP based protocol (look into socket communication)
Most games are segmented into "worlds", limiting the number of players that are in the same virtual universe to the number of game events that one server (probably with lots of CPUs and lots of memory) can reasonably process. The exact event processing mechanism depends on the requirements of the game designer, but generally I expect that incoming events go into a priority queue, prioritized by time received and/or time sent and probably other criteria along the lines of "how bad is it if we ignore this event?" (a sketch of such a queue follows below).
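As a rough illustration of that last point, here is a hedged sketch in C++ of a timestamp-ordered event queue; the event fields and the "drop stale movement events" policy are invented for the example, not any particular game's design:

    // Sketch: incoming game events processed in timestamp order.
    #include <cstdint>
    #include <iostream>
    #include <queue>
    #include <string>
    #include <vector>

    struct GameEvent {
        uint64_t    timestamp_ms;   // when the event was sent/received
        int         player_id;
        std::string action;
    };

    // Order the heap so the earliest event comes out first.
    struct LaterFirst {
        bool operator()(const GameEvent& a, const GameEvent& b) const {
            return a.timestamp_ms > b.timestamp_ms;
        }
    };

    int main() {
        std::priority_queue<GameEvent, std::vector<GameEvent>, LaterFirst> events;
        events.push({1005, 2, "cast_spell"});
        events.push({1001, 1, "move_north"});
        events.push({1003, 3, "attack"});

        const uint64_t now_ms = 1004, max_age_ms = 2;
        while (!events.empty()) {
            GameEvent e = events.top();
            events.pop();
            // "How bad is it if we ignore this event?" Movement that is
            // too old can simply be dropped; combat cannot.
            if (e.action == "move_north" && now_ms > e.timestamp_ms + max_age_ms) {
                std::cout << "dropping stale event from player " << e.player_id << "\n";
                continue;
            }
            std::cout << "processing " << e.action << " from player " << e.player_id << "\n";
        }
        return 0;
    }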
This is a very large subject overall. I would suggest you check over on Amazon.com for books covering this topic.
What server infrastructure are they built on? especially with so many clients connected and communicating in real time.
I'd guess the servers will be running on Linux, BSD or Solaris almost 99% of the time.
Do they manage with scripts that execute on page requests? or installed services that run in the background and manage communication with connected clients?
The server your client talks to will be running a daemon or service that sits idle listening for connections. For instances (dungeons), a new process is usually launched for each group, which would mean there is a dispatcher service somewhere managing this (analogous to a thread pool).
Do they use other protocols? because HTTP does not allow servers to push data to clients.
UDP is the protocol used. It's fast because it makes no guarantee that a packet will be received. You don't care if a bit of latency causes the client to lose their world position.
How do the "engines" work, to centrally process hundreds of conflicting gameplay events?
Most MMOs have zones, which limits this to a certain number of people. For those that do have hundreds of people in one area, there is usually high latency: the server has to deal with hundreds of spells being sent its way and must calculate damage amounts for each one. For the big five MMOs I imagine there are teams of 10-20 very intelligent, mathematically gifted developers working on this daily, and there isn't an MMO out there that has got it right yet; most break after 100 players.
--
Have a look for Wowemu (there's no official site and I don't want to link to a dodgy site). It is based on ApireCore, which is an MMO simulator, basically a reverse engineering of the WoW protocol. This is what the private WoW servers run on. From what I recall, Wowemu is
MySQL
Python
However, ApireCore is C++.
The backend for Wowemu is amazingly simple (I tried it in 2005, however) and probably a complete oversimplification of the database schema, but it does give you a good idea of what's involved.
Because MMOs by and large require the resources of a business to develop and deploy, at which point they are valuable company IP, there isn't a ton of publicly available information about implementations.
One thing that is fairly certain is that, since MMOs by and large use a custom client and 3D renderer, they don't use HTTP: they aren't web browsers. Online games are going to have their own protocols built on top of TCP/IP or UDP.
The game simulations themselves will be built using the same techniques as any networked 3D game, so you can look towards resources for that problem domain to learn more.
For the big daddy, World of Warcraft, we can guess that their database is Oracle because Blizzard's job listings frequently cite Oracle experience as a requirement/plus. They use Lua for user interface scripting. C++ and OpenGL (for Mac) and Direct3D (for PC) can be assumed as the implementation languages for the game clients because that's what games are made with.
One company that is cool about discussing their implementation is CCP, creators of Eve online. They have published a number of presentations and articles about Eve's infrastructure, and it is a particularly interesting case because they use Stackless Python for a lot of Eve's implementation.
http://www.disinterest.org/resource/PyCon2006-StacklessInEve.wmv
http://us.pycon.org/2009/conference/schedule/event/91/
There was also a recent Game Developer Magazine article on Eve's architecture:
https://store.cmpgame.com/product/3359/Game-Developer-June%7B47%7DJuly-2009-Issue---Digital-Edition
The Software Engineering radio podcast had an episode with Jim Purbrick about Second Life which discusses servers, worlds, scaling and other MMORPG internals.
Traditionally MMOs have been based on C++ server applications running on Linux communicating with a database for back end storage and fat client applications using OpenGL or DirectX.
In many cases the client and server embed a scripting engine which allows behaviours to be defined in a higher level language. EVE is notable in that it is mostly implemented in Python and runs on top of Stackless rather than being mostly C++ with some high level scripts.
Generally the server sits in a loop, reading requests from connected clients, processing them to enforce game mechanics, and then sending out updates to the clients. UDP can be used to minimize latency and the retransmission of stale data, but as RPGs generally don't employ twitch gameplay, TCP/IP is normally a better choice. Comet or BOSH can be used to allow bi-directional communication over HTTP for web-based MMOs, and WebSockets will soon be a good option there.
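A minimal sketch of that loop, here with select() over TCP in C++ (the port, the 50 ms tick, and the "tick" update message are placeholder assumptions, not any real game's protocol):

    // Sketch: read client requests, apply game mechanics, broadcast updates.
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <algorithm>
    #include <vector>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(4000);                      // example port
        bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        listen(listener, 16);

        std::vector<int> clients;
        for (;;) {                                        // one iteration = one server tick
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(listener, &readable);
            int maxfd = listener;
            for (int c : clients) { FD_SET(c, &readable); maxfd = std::max(maxfd, c); }

            timeval tick{0, 50000};                       // wake at least every 50 ms
            select(maxfd + 1, &readable, nullptr, nullptr, &tick);

            if (FD_ISSET(listener, &readable))            // a new player connecting
                clients.push_back(accept(listener, nullptr, nullptr));

            char buf[512];
            for (auto it = clients.begin(); it != clients.end();) {
                if (FD_ISSET(*it, &readable)) {
                    ssize_t n = read(*it, buf, sizeof buf);
                    if (n <= 0) { close(*it); it = clients.erase(it); continue; } // disconnect
                    // enforce game mechanics on the request here (omitted)
                }
                ++it;
            }

            for (int c : clients)                         // send out updates to the clients
                send(c, "tick\n", 5, MSG_NOSIGNAL);
        }
    }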
If I were building a new MMO today I'd probably use XMPP, BOSH and build the client in JavaScript as that would allow it to work without a fat client download and interoperate with XMPP based IM and voice systems (like gchat). Once WebGL is widely supported this would even allow browser based 3D virtual worlds.
Because the environments are too large to simulate in a single process, they are normally split up geographically between processes each of which simulates a small area of the world. Often there is an optimal population for a world, so multiple copies (shards) are run which different sets of people use.
There's a good presentation about the Second Life architecture by Ian Wilkes who was the Director of Operations here: http://www.infoq.com/presentations/Second-Life-Ian-Wilkes
Most of my talks on Second Life technology are linked to from my blog at: http://jimpurbrick.com
Take a look at Erlang. It's a concurrent programming language and runtime system, and was designed to support distributed, fault-tolerant, soft-real-time, non-stop applications.

Tool to monitor HTTP, TCP, etc. Web Service traffic [closed]

What's the best tool to monitor web service, SOAP, WCF, etc. traffic coming and going on the wire? I have seen some tools made with Java, but they seem a little crappy. What I want is a tool that sits in the middle as a proxy and does port redirection (with configurable listen/redirect ports). Are there any tools for Windows that do this?
For Windows HTTP, you can't beat Fiddler. You can use it as a reverse proxy for port-forwarding on a web server. It doesn't necessarily need IE, either. It can use other clients.
Wireshark does not do port redirection, but sniffs and interprets a lot of protocols.
You might find Microsoft Network Monitor helpful if you're on Windows.
Wireshark (or Tshark) is probably the de facto standard traffic inspection tool. It is unobtrusive and works without fiddling with port redirection and proxying. It is very generic, though, and (AFAIK) does not provide any tooling specifically to monitor web service traffic; it's all TCP/IP and HTTP.
You have probably already looked at tcpmon but I don't know of any other tool that does the sit-in-between thing.
I tried Fiddler with the reverse proxy ability mentioned by @marxidad, and it seems to work fine. Since Fiddler is a familiar UI for me and can show requests/responses in various formats (i.e. Raw, XML, Hex), I accept it as the answer to this question. One thing, though: I use WCF, and I got the following exception with the reverse proxy setup:
The message with To 'http://localhost:8000/path/to/service' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree
I have figured out (thanks Google, erm.. I mean Live Search :p) that this is because my endpoint addresses on the server and the client differ by port number. If you get the same exception, consult the following MSDN forum message:
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2302537&SiteID=1
which recommends using the clientVia endpoint behavior explained in the following MSDN article:
http://msdn.microsoft.com/en-us/magazine/cc163412.aspx
I've been using Charles for the last couple of years. Very pleased with it.
I second Wireshark. It is very powerful and versatile.
And since this tool works not only on Windows but also on Linux and Mac OS X, investing your time to learn it (quite easy, actually) makes sense whatever platform or language you use.
Regards,
Richard
Just Programmer
http://sili.co.nz/blog
I find WebScarab very powerful.
Check out Paros Proxy.
JMeter's built-in proxy may be used to record all HTTP request/response information.
Firefox "Live HTTP headers" plugin may be used to see what is happening on the browser side when sending/receiving request.
Firefox "Tamper data" plugin may be useful when you need to intercept and modify request.
I use LogParser to generate graphs and look for elements in IIS logs.