What is the difference between 'SAS' and 'Salesforce'

I will be starting full-time at a company, where I was told that the application is developed using 'SAS' and 'Salesforce'. What is the difference between the two?
And which online resources are recommended for learning more about them?

SAS is software for statistical analysis. If your company/job description doesn't involve working with large data sets and complex reporting, that's probably not it.
They probably mean the SaaS (Software as a Service) model, also known as "the cloud", cloud computing, etc. You write the program (or use/modify an existing one) but you don't buy servers or worry about network connections, electricity costs, or load balancing (spikes in traffic will not cause your website to go down). Many apps operate in this model: Microsoft's Azure cloud, for example (or even the online versions of MS Office). There's Oracle's Siebel CRM On Demand, Microsoft Dynamics, and I think SAP also has a SaaS offering...
It's a big topic and I'm simplifying a lot here. And then there are Platform as a Service (PaaS) things too, where they give you "just" the hosting etc. but no base application to build on top of; you write everything you need from scratch and upload it. Think Heroku or Amazon Web Services (AWS).
Salesforce is "just" one more SaaS application. You start with a base application & database, similar to all other clients in the world. You can install plugins into it (some free, some paid), configure it yourself, or write custom code if your functionality is too complex... You can do a lot with just clicks and drag & drop, but if you need to code stuff then JavaScript (for the client side) and Apex (for the server side) will be your friends. Apex is a bit similar to Java.
Where to start... Trailhead is a good source of self-paced training. You can sign up for a free Salesforce Developer Edition (it has almost all the features of the paid editions but limited storage space) and try to complete some courses... Or in the Salesforce Help & Training site there should be tons of videos (actually, in that link the whole left menu "Getting started with Salesforce" might be good).

Related

Django deployment with Java and C++

I have created a Django app that uses C++ for some of the views as well as a Java library. How would I deploy this app? What kind of hosting service allows for multiple languages? I have looked at EC2, GAE, and several platforms (like Heroku) but I can't seem to find a definitive solution.
I have never deployed anything to the web so a simple explanation would be much appreciated.
PaaS stuff is probably not your best bet. If you want the scalability and associated buzzwords (muh 99.9999999999% availability because my servers are hosted in a parallel dimension without electrical storms, power outages, hurricanes, earthquakes, or nuclear holocausts) that come with hosting your application on a huge web company's platform, check out IaaS (Infrastructure as a Service) systems like Google's Compute Engine or AWS. With these you just get a virtual server (or servers) running your Linux distro of choice, and you can install and run whatever you please on them without being constrained to a specific platform like App Engine or Heroku (where you basically have to write your app specifically to run on that platform). If you plan on consuming a ton of bandwidth/resources from the get-go, you will almost certainly get a better deal using dedicated servers from a small company.
Interested in what specifically you are executing C++ for in a Django view. Image/video processing?
Well. Deployment is not really something where a simple explanation helps much.
First, I would check what the requirements on the operating system are (compilers, dependencies, ...). That may narrow the options down quickly.
I guess that with a setup containing C++ & Java artifacts, the usual PaaS offerings (GAE, Heroku, ...) will not be sufficient because they define the stack. And a mixture of Python/C++/Java is rather uncommon, I'd say.
Choosing an IaaS offering (EC2, …) may be an option. There you can run your whole self-defined stack and have the possibility of easier scaling.
Hosting the application on your own server(s) is also always possible. Check your data protection regulations; it may even be a requirement.
There are a lot of ways to get the Django application to run. The Django documentation has some information about deployment. If you have certain special requirements, uwsgi may be a good application server.
You may also want a web server in front of the application. Possibilities range from using uwsgi's built-in HTTP server to putting e.g. Nginx in front of uwsgi.
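As a concrete illustration, assuming a standard Django project named mysite (the name is hypothetical), this is the WSGI entry point that uwsgi loads, with one possible launch command shown as a comment:

```python
# mysite/wsgi.py -- the standard Django WSGI entry point that uwsgi loads.
import os

from django.core.wsgi import get_wsgi_application

# Point Django at the settings module of the (hypothetical) project "mysite".
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

# "application" is the callable that uwsgi (or any WSGI server) looks for.
application = get_wsgi_application()

# Example launch (behind Nginx you would typically use a unix socket instead):
#   uwsgi --http :8000 --module mysite.wsgi
```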
All in all, every component of the whole "deployment" has hundreds of bells and whistles, and it's not easy to give advice without knowing the specific requirements and properties of the system itself. You'll also probably need a database you have to deploy.
But before deploying it to the web, it's also important to have a solid build process to assemble all the parts, and not only on the development machine. With three languages involved, this should be the first step to solve. If it deploys easily and automagically in a development environment, moving it to a server is easier.
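To make the build-process point concrete, here is a minimal sketch (file and module names are hypothetical) of how the C++ part could be compiled as a Python extension with setuptools, so the native piece is built by one repeatable command; the Java library would need its own scripted build step, which is not shown here.

```python
# setup.py -- builds the (hypothetical) C++ helper used by some Django views.
from setuptools import setup, Extension

native_module = Extension(
    "myapp.native",                # import path of the compiled module
    sources=["myapp/native.cpp"],  # hypothetical C++ source file
    extra_compile_args=["-O2"],
)

setup(
    name="myapp-native",
    version="0.1",
    ext_modules=[native_module],
)

# Build in place during development with:
#   python setup.py build_ext --inplace
```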

What are the pros and cons of developing a web app using Parse vs. AWS?

From what I know, Parse offers convenient communication stacks for various platforms such as iOS, so it is easy to build clients that use your web app.
But Parse also seems to be tightly integrated with Facebook. If you were to build a web app that does not need Facebook, but that may integrate with Facebook in the long term, is Parse the clear winner over deploying directly to AWS, or are there important disadvantages to consider?
As far as I understand their page, Parse is a PaaS (platform as a service) provider like Heroku and others, while AWS is an IaaS (infrastructure as a service) provider.
Pros for PaaS:
They care about the infrastructure
You build your app on an existing platform
For the start you don't need "ops-guys" as you don't do ops
You can use their knowledge and prebuilt tools to your advantage
Pros for IaaS:
You have full control over the underlying infrastructure
You can start with a greenfield and build whatever you want
You can use tools like Puppet / Chef / ... to control your servers
You don't have to pay for the additional stuff you get when using PaaS
(but have to pay your people for it)
So there is no winner in this "battle"; you have to decide whether you want to use prebuilt tooling and give up some independence for it, or whether you want absolute control over everything (nearly, since you still can't touch the hardware) and invest time and manpower into building your own tooling.
"Better, Faster, Cheaper.."
If you are pursuing mobile first strategy, Parse is a great tool for bootstrapping a mature, full web-presence from nothing more than an original beta app.
I don't have direct experience with AWS.
I have used Heroku/Parse to integrate (very quickly) a standalone mobile app with a back end that needs to cover the following:
DB / persistence / NoSQL
Workflow - async tasks
REST API interface over HTTP
Once the mobile app existed with only stubbed local data, Parse allowed a single engineer to build out ALL of the infrastructure mentioned above very quickly, taking the app from single-user to multi-user with a full DB and workflow that backs client-side events with considerable server-side and cloud-side business logic and process. Scaling-related startup stuff that used to take weeks took only days.
The compression (time & money) when scaling up an app stack is really something. The Parse API did almost everything that I needed, with one small exception (remuxing UGC media).
Personally, I abandoned the Parse Android SDK in favor of the more robust REST API (threading on the client side and heavy HTTP activity).
Developers used to curl/REST dev stacks will take to Parse.
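To illustrate that point, here is a minimal sketch of what a Parse REST call looked like against the classic hosted api.parse.com endpoint; the class name and keys are placeholders, not the author's actual backend.

```python
import requests

# Placeholders: real application and REST API keys come from the Parse dashboard.
APP_ID = "yourAppId"
REST_KEY = "yourRestApiKey"
HEADERS = {
    "X-Parse-Application-Id": APP_ID,
    "X-Parse-REST-API-Key": REST_KEY,
    "Content-Type": "application/json",
}

# Create an object in a (hypothetical) "GameScore" class.
resp = requests.post(
    "https://api.parse.com/1/classes/GameScore",
    headers=HEADERS,
    json={"score": 1337, "playerName": "Sean Plott"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"createdAt": "...", "objectId": "..."}
```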

Help emulating Heroku, GAE, etc.: Building a web service privately (PaaS)

I'm not the only one with this question, but haven't found a lot of information in my research so far, so help me out.
We are a small IT crowd in an organization. We're looking to build a small, private service that would emulate a Heroku/GAE workflow. The basics of this: deploy an app as a git repository, and have it scale in a 'cloud' environment. Basically, a platform as a service (PaaS).
Pretend we are amateur PMs, programmers, and sysadmins tasked with this. What would you recommend? We know generally what is needed: some sort of routing, database, caching, authentication, etc. What other tools do we need?
We would prefer tools along a Ruby/Python/Haskell/Erlang dimension, on a Linux/BSD stack, with Postgres databases (CouchDB or Cassandra in the future). We are not touching anything in the MS/.NET area, and nothing on the JVM (we've looked at Steamcannon, but no; Scala and Clojure tools are not entirely out of the question). We have a basic grasp of bootstrapping a cloud (e.g. Eucalyptus) to build on. We have an understanding of the basics of server admin, and the physical infrastructure limitations aren't a factor right now.
We're not looking into why gaerokuyardspace is the best choice, a list of such services, why we should ditch our plans for one of these services, or an argument against this plan. For this situation the decision has been made that the cost to build privately is more attractive than the cost of deploying elsewhere. We already know why and how for these services. We're looking to emulate and build upon these for private needs.
A short list of tools to be expanded:
Beehive
Steamcannon
Gitosis/Gitolite
?
Basically, I'd like to generate a list of tools for building heroku/gae like service on a small, private, definitely experimental/toy level.
I don't know that it will meet all of your stated needs today, but you should take a look at Cloud Foundry from VMware. You can check the FAQ for the commercial project or look into the open-source version that you can host and manage yourself.
Some combination of Cloud Foundry (above), gitolite, and Fabric will probably do well for you. Any such solution will take some time to get right.
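As an example of the Fabric piece, a minimal fabfile sketch (host names, paths, and the service name are hypothetical) that pulls the latest code from a gitolite-hosted repo on each app server and restarts the app might look like this:

```python
# fabfile.py -- minimal deploy sketch using Fabric 1.x; hosts, paths, and the
# service name are hypothetical placeholders.
from fabric.api import cd, env, run, sudo

env.hosts = ["app1.internal", "app2.internal"]  # your app servers
APP_DIR = "/srv/myapp"                           # checkout location on each host

def deploy():
    """Pull the latest code from the gitolite repo and restart the app."""
    with cd(APP_DIR):
        run("git pull origin master")
        run("pip install -r requirements.txt")
    sudo("service myapp restart")  # or supervisorctl/systemctl, depending on the stack

# Usage: fab deploy
```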
(Disclaimer: I'm a lead developer on the AppScale project)
AppScale is pretty much right up your alley, especially if you're looking to run Google App Engine apps in your own private cloud. It's open source, so grab it and extend it if there are other types of apps you want to support (and definitely commit it back to us if you do).

SOA / ESB Dilemma

Sorry for the very involved question, but this is something I've been researching for a while now and it is really frustrating me. I feel like in today's age we have a million and one ways to implement services that are cross-platform (SOAP) and easy to build (thanks to .NET, Java, and other frameworks). However, these technologies have been in the community for 5-10 years, but we are (or at least I am) constantly plagued with the same issues:
Identification (Tracking services) - UDDI; e.g., I had to remind a co-worker three times this month where a service is, despite the fact that there is a wiki that discusses the service and a PDF version of the same documentation that lives in a repository where we keep our service docs.
Scalability - Out of the box clustering; As organizations, we spend a lot of money on paying our admins just to watch the utilization of our services and make decisions like, does this service need more RAM, more CPU, more interfaces? How do I load balance this?
Monitoring - error logging, etc; I can't count how many times I have to set up tracing on services in order to see why a bug is happening that only seems to affect one customer, or have to code logic into the service to serialize exceptions, log exceptions to dbs, fail gracefully, etc.
Deployment - easy to deploy; none of this deploying DLLs to 5 load balanced servers
Each one of these problems requires some type of custom solution implemented by the organization. Documentation and UDDIs for #1. Virtualization and load balancing hardware / software for #2. Tracing, writing exceptions to databases / logs, etc for #3. Custom deployment software for #4. I work for a mid-sized organization. I can't even imagine how a company the size of Sun, Google, or Microsoft would tackle these dilemmas.
Maybe my vision is unrealistic, but I dream of having a framework, per se, that lives on top of a server cluster and manages all of the above. I was ecstatic to read about Microsoft's AppFabric since it really seems to extend some of the functionality of BizTalk to WCF service implementors: caching, hosting, monitoring, etc. However, from what I've seen, I still don't feel it lives up to my dream of an all-in-one solution that assists the developer and organization in writing services that are scaled across clusters easily, deployed into the cluster easily, and identifiable, possibly even versionable.
So, I don't mean this post to be about my dream. I do actually have a question. For starters, is my dream/want completely unrealistic? Furthermore, what solutions are available that attempt to solve these problems without confining us to a new and more proprietary way (BizTalk) of developing services? And lastly, concerning a complete SOA/ESB solution, where do we see the most potential in the market right now or in the future?
I think that you are talking about different kinds of problems here.
1). Developers who don't read documentation. This is an endemic problem, not limited to SOA - just look at some questions on StackOverflow. At least the developer is asking you whether there is a service, rather than just duplicating logic in their own code. I don't see any technical solution to these kinds of problems; you've already provided good registries and documentation, but some developers prefer to talk to people. Maybe, even, this is actually a good thing - human interaction has value above the technical content of the interaction. Or maybe you're too nice: "No, I won't answer that question, look it up."
2). Scaling. There are technologies addressing this issue. (Disclaimer: I work for IBM, which sells some, so I'll reference those; I'm not intending to imply that IBM is the only vendor with solutions in this space.) There are products such as this that can provision a new machine, install a software stack, and add it to a cluster to address workload changes. Then, at a finer-grained level of control in the Java EE world, the application server can dynamically shape traffic and adjust clusters. See WebSphere Virtual Enterprise.
3). Monitoring. I don't "get" what you expect here. In all likelihood such tricky bugs will require application-level tracing. For some problems, such as finding memory leaks and performance bottlenecks, there are very good tools, at least in the Java EE world.
4). I can't speak to the .NET world, but I'd say that Java EE app servers do a reasonable job of deploying apps across clusters smoothly, and in the cases where we use JNI and need DLLs deployed, we can use products such as the Tivoli stack I mentioned to manage this.
So, in summary, I do think that vendors are trying to address these issues. And I don't think your life would be simpler without SOA. Imagine instead the same problems applied to myriad separate, independent applications.
Here's my two cents.
I've been a developer at a company that used SOA incorrectly. The worst solution they implemented was field-level validation of form elements on a desktop app using SOA. To perform acceptably, these require very low latency; a 2-4 second wait to change to a new field gets old fast. The service ran over the network on a BizTalk server. Everyone hated it.
If you're going to do this you really need to spend a lot of time dealing with network latency, service failure, timing, and timeout issues.
Don't get carried away and think SOA is the solution to every problem. Used at a high level it's great, used at a low level it makes your applications fragile, slow, and impossible to debug.
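To make the latency/timeout point concrete, here is a minimal sketch in Python (the endpoint is hypothetical) of the kind of defensive wrapper you end up writing around every remote call: an explicit timeout, a retry, and a graceful fallback instead of hanging the UI.

```python
import logging

import requests

log = logging.getLogger(__name__)

def validate_field(value, url="https://soa.example.internal/validate", retries=1):
    """Call a (hypothetical) remote validation service defensively.

    Returns the service's verdict, or None if the service is slow or down,
    so the caller can fall back to local validation instead of blocking the UI.
    """
    for attempt in range(retries + 1):
        try:
            resp = requests.post(url, json={"value": value}, timeout=2)
            resp.raise_for_status()
            return resp.json().get("valid")
        except requests.RequestException as exc:
            log.warning("validation service failed (attempt %d): %s", attempt + 1, exc)
    return None  # the caller decides how to degrade gracefully
```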
If you talk to IBM or one of the big SOA vendors, they have products that cover each scenario.
Identification (Tracking services) - UDDI; e.g., I had to remind a co-worker three times this month where a service is, despite the fact that there is a wiki that discusses the service and a PDF version of the same documentation that lives in a repository where we keep our service docs.
Registry and Repository server. The nice thing is that it does governance (promotion, demotion, versioning, approval), and your ESB typically does a "lookup" for the latest and greatest against the registry server.
Scalability - Out of the box clustering; As organizations, we spend a lot of money on paying our admins just to watch the utilization of our services and make decisions like, does this service need more RAM, more CPU, more interfaces? How do I load balance this?
Transaction monitoring software like IBM Tivoli Composite Application Manager for SOA. Basically, it tracks things from a horizontal point of view to see whether there is a service disruption from an end-user/end-app point of view.
As far as your clustering goes, you have to pick good middleware and architecture. Personally speaking, get stuff that is "cloud"-ready: app servers with NoSQL, connected by MOM (message-oriented middleware).
Monitoring - error logging, etc; I can't count how many times I have to set up tracing on services in order to see why a bug is happening that only seems to affect one customer, or have to code logic into the service to serialize exceptions, log exceptions to dbs, fail gracefully, etc.
Enterprise standards for your developers and for your vendors. Integration of all business and system events into a single dashboard. (Most companies split them.) This is already done at most enterprise shops.
Deployment - easy to deploy; none of this deploying DLLs to 5 load balanced servers
Ahh.. Microsoft IIS Web Deployment Tool 2.0. You can sync 100s of MS servers by just updating the master. It's really easy.

Identifying ASP.NET web service references

At my day job we have load balanced web servers which talk to load balanced app servers via web services (and lately WCF). At any given time, we have 4-6 different teams that have the ability to add new web sites or services or consume existing services. We probably have about 20-30 different web applications and corresponding services.
Unfortunately, given that we have no centralized control over this due to competing priorities, org structures, project timelines, financial buckets, etc., it is quite a mess. We have a variety of services that are reused, but a bunch that are specific to a front-end.
Ideally we would have better control over this situation, and we are trying to get control over it, but that is taking a while. One thing we would like to do is find out more about all of the inter-relationships between the web sites and the app servers.
I have used Reflector to find dependencies among assemblies, but would like to be able to see the traffic patterns between services.
What are the options for trying to map out web service relationships? For the most part, we are mainly talking about internal services (web to app, app to app, batch to app, etc.). Off the top of my head, I can think of two ways to approach it:
Analyze assemblies for any web references. The drawback here is that not everything is a web reference and I'm not sure how WCF connections are listed. However, this would at least be a start for finding 80% of the connections. Does anyone know of any tools that can do that analysis? Like I said, I've used Reflector for assembly references but can't find anything for web references.
Possibly tap into IIS and passively monitor the traffic coming in and out, and somehow figure out what is being called and from where. We are looking at enterprise tools that could help, but it would be a while before they are implemented (and they cost a lot). But is there anything out there that could help out quickly and cheaply? One tool in particular (AmberPoint) can tap into IIS on the servers, monitor inbound and outbound traffic, add a little special sauce, and begin to build a map of the traffic. Very nice, but it costs a bundle.
I know, I know, how the heck did you get into this mess in the first place? Beats me, just trying to help us get control of it and get out of it.
Thanks,
Matt
The easiest way is to look through the logs, but if that doesn't include the referrer then you may also want to monitor what is going out from your web server to the app server. You can use tools like Wireshark or Microsoft Network Monitor to see this traffic.
The other "solution", and I use this loosely, is to bind a specific web server to an app server and then run through a bundle of requests to see what it is hitting on the app server. You could probably do this in a test environment to lessen the effects on the users of the site.
You need a service registry (UDDI??)... If you had a means to catalog these services and their consumers, it would make this job of dependency discovery a lot easier. That is not an easy solution, though. It takes time and documentation to get a catalog in place.
I think the quickest solution would be to query your IIS logs and find source URLs which originate from your own servers. You would at least be able to track down which servers your consumers are coming from.
Also, if you already have some kind of authentication mechanism in place, you could trace who is using a particular service based on login.
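As a rough sketch of the log-mining idea (assuming IIS W3C extended logs with the c-ip field enabled; adjust the field names and internal address ranges to your environment), you could tally which of your own servers the requests originate from:

```python
from collections import Counter

# Address prefixes that identify your own web and app servers (hypothetical values).
INTERNAL_PREFIXES = ("10.", "192.168.")

def count_internal_callers(log_path):
    """Tally client IPs of requests coming from our own servers in an IIS W3C log."""
    fields = []
    counts = Counter()
    with open(log_path) as log_file:
        for line in log_file:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # e.g. date time c-ip cs-method cs-uri-stem ...
            elif not line.startswith("#") and fields:
                row = dict(zip(fields, line.split()))
                client_ip = row.get("c-ip", "")
                if client_ip.startswith(INTERNAL_PREFIXES):
                    counts[client_ip] += 1
    return counts

if __name__ == "__main__":
    for ip, hits in count_internal_callers("u_ex140101.log").most_common():
        print(ip, hits)
```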
You are right about AmberPoint. There are other tools that catalog the service traffic and provide reports showing what is happening to your services. Systinet, SOA Software, and Actional also have products similar to AmberPoint, but AmberPoint has a freeware version, I believe.